In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks; as an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state; the transition from one state to the next is not necessarily deterministic. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers and in the Euclidean algorithm for finding the greatest common divisor of two numbers; the word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized as Algoritmi.
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in the Latinization of the name of Muhammad ibn Musa al-Khwarizmi, in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer, and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, English 'algorism', a corruption of his name, meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus; the corresponding English term 'algorithm' is first attested in the 17th century. In English, 'algorism' was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. Which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice (Talibus Indorum), or Hindu numerals.
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. A program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm for determining the greatest common divisor of two integers. Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human capable of carrying out only elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large, thus an algorithm can be an algebraic equation such as y = m + n – two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: Precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format
In mathematics, a real-valued function is a function whose values are real numbers; in other words, it is a function that assigns a real number to each member of its domain. Many important function spaces are defined to consist of real-valued functions. Let X be an arbitrary set, and let F denote the set of all functions from X to the real numbers R. Because R is a field, F may be turned into a vector space and a commutative algebra over the reals by adding the appropriate structure:

f + g: x ↦ f(x) + g(x) (vector addition)
0: x ↦ 0 (additive identity)
c f: x ↦ c f(x), c ∈ R (scalar multiplication)
f g: x ↦ f(x) g(x) (pointwise multiplication)

Also, since R is an ordered set, there is a partial order on F: f ≤ g ⟺ ∀x: f(x) ≤ g(x), which makes F a partially ordered ring. The σ-algebra of Borel sets is an important structure on the real numbers. If X has its own σ-algebra and a function f is such that the preimage f⁻¹(B) of any Borel set B belongs to that σ-algebra, then f is said to be measurable. Measurable functions form a vector space and an algebra as explained above. Moreover, a set of real-valued functions on X can define a σ-algebra on X, namely the one generated by all preimages of all Borel sets.
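The pointwise definitions just given translate directly into code. The following is an illustrative sketch, not from the source, representing functions from X to R as Python callables; all names are hypothetical.

```python
# Pointwise structure on the set F of functions X -> R.
def add(f, g):        # vector addition: (f + g)(x) = f(x) + g(x)
    return lambda x: f(x) + g(x)

def scale(c, f):      # scalar multiplication: (c f)(x) = c * f(x)
    return lambda x: c * f(x)

def mul(f, g):        # pointwise multiplication: (f g)(x) = f(x) * g(x)
    return lambda x: f(x) * g(x)

zero = lambda x: 0.0  # additive identity: 0(x) = 0

f = lambda x: x + 1.0
g = lambda x: x * x
h = add(scale(2.0, f), mul(f, g))  # h(x) = 2(x + 1) + (x + 1) * x**2
print(h(3.0))  # 2*4 + 4*9 = 44.0
```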
This is how σ-algebras arise in probability theory, where real-valued functions on the sample space Ω are real-valued random variables. The real numbers form a complete metric space. Continuous real-valued functions are important in the theories of topological spaces and of metric spaces; the extreme value theorem states that for any real continuous function on a compact space, its global maximum and minimum exist. The concept of a metric space itself is defined with a real-valued function of two variables, the metric, which is continuous. The space of continuous functions on a compact Hausdorff space has a particular importance. Convergent sequences can also be considered as real-valued continuous functions on a special topological space. Continuous functions form a vector space and an algebra as explained above, and are a subclass of measurable functions because any topological space has the σ-algebra generated by open sets. Real numbers are used as the codomain to define smooth functions. The domain of a real smooth function can be the real coordinate space, a topological vector space, an open subset of them, or a smooth manifold.
Spaces of smooth functions are vector spaces and algebras as explained above, and are a subclass of continuous functions. A measure on a set is a non-negative real-valued functional on a σ-algebra of subsets. Lp spaces on sets with a measure are defined from the aforementioned real-valued measurable functions, although they are actually quotient spaces. More precisely, whereas a function satisfying an appropriate summability condition defines an element of an Lp space, in the opposite direction, for any f ∈ Lp and x ∈ X that is not an atom, the value f(x) is undefined. Nevertheless, real-valued Lp spaces still have some of the structure described above: each Lp space is a vector space and has a partial order, and there exists a pointwise multiplication of "functions" which changes p, namely

⋅ : L^{1/α} × L^{1/β} → L^{1/(α+β)}, where 0 ≤ α, β ≤ 1 and α + β ≤ 1.

For example, the pointwise product of two L2 functions belongs to L1. Other contexts where real-valued functions and their special properties are used include monotonic functions, convex functions and subharmonic functions, analytic functions, algebraic functions, and polynomials.
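The final claim, that the pointwise product of two L2 functions lies in L1, is the case α = β = 1/2 of the multiplication map above; a one-line check via the Cauchy–Schwarz (Hölder) inequality:

```latex
\int_X |f g| \, d\mu
  \;\le\; \left( \int_X |f|^2 \, d\mu \right)^{1/2}
          \left( \int_X |g|^2 \, d\mu \right)^{1/2}
  \;=\; \|f\|_{L^2} \, \|g\|_{L^2} \;<\; \infty .
```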
See also: real analysis; partial differential equations, a major user of real-valued functions; norm; scalar.
Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers; the study of mathematical optimization delivers methods, theory, and application domains to the field of machine learning. Data mining is a field of study within machine learning that focuses on exploratory data analysis through unsupervised learning.
In its application across business problems, machine learning is also referred to as predictive analytics. The name machine learning was coined in 1959 by Arthur Samuel. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we can do?". In Turing's proposal, the various characteristics that could be possessed by a thinking machine, and the various implications in constructing one, are exposed. Machine learning tasks are classified into several broad categories.
In supervised learning, the algorithm builds a mathematical model from a set of data that contains both the inputs and the desired outputs. For example, if the task were determining whether an image contained a certain object, the training data for a supervised learning algorithm would include images with and without that object, and each image would have a label designating whether it contained the object. In special cases, the input may be only partially available, or restricted to special feedback. Semi-supervised learning algorithms develop mathematical models from incomplete training data, where a portion of the sample inputs do not have labels. Classification algorithms and regression algorithms are types of supervised learning. Classification algorithms are used when the outputs are restricted to a limited set of values. For a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email. For an algorithm that identifies spam emails, the output would be the prediction of either "spam" or "not spam", represented by the Boolean values true and false.
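To make the supervised workflow concrete (labeled training data in, a predictive model out), here is a hedged toy sketch, not from the source: a nearest-centroid spam classifier over invented two-dimensional word-count features.

```python
import math

# Invented training set: (features, label), where the two features are
# hypothetical word counts (occurrences of "free" and of "meeting").
train = [
    ([3.0, 0.0], "spam"),
    ([2.0, 1.0], "spam"),
    ([0.0, 2.0], "not spam"),
    ([1.0, 3.0], "not spam"),
]

def mean(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# "Training": one centroid per label, the mean of that label's examples.
labels = {y for _, y in train}
centroids = {y: mean([x for x, label in train if label == y]) for y in labels}

def predict(x):
    """Assign the label whose class centroid is nearest to x."""
    return min(centroids, key=lambda y: math.dist(x, centroids[y]))

print(predict([4.0, 1.0]))  # -> "spam" (many occurrences of "free")
```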
Regression algorithms are named for their continuous outputs, meaning they may have any value within a range. Examples of continuous values are the length or price of an object. In unsupervised learning, the algorithm builds a mathematical model from a set of data which contains only inputs and no desired output labels. Unsupervised learning algorithms are used to find structure in the data, such as grouping or clustering of data points. Unsupervised learning can discover patterns in the data and can group the inputs into categories, as in feature learning. Dimensionality reduction is the process of reducing the number of "features", or inputs, in a set of data. Active learning algorithms access the desired outputs for a limited set of inputs based on a budget, and optimize the choice of inputs for which they will acquire training labels; when used interactively, these can be presented to a human user for labeling. Reinforcement learning algorithms are given feedback in the form of positive or negative reinforcement in a dynamic environment, and are used in autonomous vehicles or in learning to play a game against a human opponent.
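For the grouping/clustering just described, the following is a minimal sketch of k-means, a standard unsupervised algorithm (the data, k, and iteration count are invented for illustration):

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: alternately assign each point to its nearest
    centroid, then recompute each centroid as its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            [sum(col) / len(cluster) for col in zip(*cluster)]
            if cluster else centroids[i]  # keep old centroid if cluster empties
            for i, cluster in enumerate(clusters)
        ]
    return centroids

# Invented toy data: two obvious groups in the plane.
data = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]]
print(kmeans(data, k=2))  # two centroids, one near each group
```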
Other specialized algorithms in machine learning include topic modeling, where the computer program is given a set of natural language documents and finds other documents that cover similar topics. Machine learning algorithms can be used to find the unobservable probability density function in density estimation problems. Meta-learning algorithms learn their own inductive bias based on previous experience. In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans; these robots use guidance mechanisms such as active learning, motor synergies, and imitation. Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "machine learning" in 1959 while at IBM. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data.
They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks". Probabilistic reasoning was also employed, especially in automated medical diagnosis.
Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming. More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists. Linear programs are problems that can be expressed in canonical form as

maximize c^T x subject to Ax ≤ b and x ≥ 0,

where x represents the vector of variables (to be determined), c and b are vectors of (known) coefficients, A is a (known) matrix of coefficients, and (·)^T denotes the matrix transpose. The expression to be maximized or minimized is called the objective function.
The inequalities Ax ≤ b and x ≥ 0 are the constraints which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable if every entry in the first is less than or equal to the corresponding entry in the second; in that case the first vector is said to be less than or equal to the second. Linear programming can be applied to various fields of study. It is used in mathematics and, to a lesser extent, in business and for some engineering problems. Industries that use linear programming models include transportation, telecommunications, and manufacturing; it has proven useful in modeling diverse types of problems in planning, scheduling, and design. The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them, and after whom the method of Fourier–Motzkin elimination is named. In 1939 a linear programming formulation of a problem equivalent to the general linear programming problem was given by the Soviet economist Leonid Kantorovich, who also proposed a method for solving it.
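To ground the canonical form above, here is a small hedged example using SciPy's linprog routine; since linprog minimizes, the maximization of c^T x is posed as minimizing -c^T x, and all coefficients below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize c^T x subject to A x <= b and x >= 0 (invented coefficients).
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

# linprog minimizes, so negate the objective to maximize.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point and maximized objective value
```

The optimum here is x = (1, 3) with objective value 9, attained at a vertex of the feasible polytope, consistent with the geometry described above.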
It was a method he developed, during World War II, to plan expenditures and returns in order to reduce costs of the army and to increase losses imposed on the enemy. Kantorovich's work was initially neglected in the USSR. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel prize in economics. In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear programs and gave a solution similar to the later simplex method. Hitchcock had died in 1957, and the Nobel prize is not awarded posthumously. During 1946–1947, George B. Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig invented the simplex method that, for the first time, efficiently tackled the linear programming problem in most cases. When Dantzig arranged a meeting with John von Neumann to discuss his simplex method, Neumann conjectured the theory of duality by realizing that the problem he had been working on in game theory was equivalent.
Dantzig provided formal proof in an unpublished report, "A Theorem on Linear Inequalities", on January 5, 1948. In the post-war years, many industries applied linear programming in their daily planning. Dantzig's original example was to find the best assignment of 70 people to 70 jobs; the computing power required to test all the permutations in order to select the best assignment is vast. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm; the theory behind linear programming drastically reduces the number of possible solutions that must be checked. The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear programming problems. Linear programming is a widely used field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems.
Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality and the importance of convexity and its generalizations. Linear programming was also used in the early formation of microeconomics.
A transport network, or transportation network, is a realisation of a spatial network, describing a structure which permits either vehicular movement or flow of some commodity. Examples include but are not limited to road networks, air routes, pipelines, and power lines. Transport network analysis is used to determine the flow of vehicles through a transport network using mathematical graph theory; it may combine different modes of transport, for example walking and car, to model multi-modal journeys. Transport network analysis falls within the field of transport engineering. Traffic has been studied extensively using statistical physics methods. A real transport network, that of Beijing, was studied using a network approach and percolation theory; the research showed that one can characterize the quality of traffic in a city at each time of day using the percolation threshold. See also: Braess' paradox; flow network; heuristic routing; Interplanetary Transport Network; network science; percolation theory.
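As an illustrative sketch of the graph-theoretic analysis described above, here is a minimal Dijkstra shortest-path routine over an invented toy road network (all node names and travel times are hypothetical):

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from source over a weighted graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road network: travel times in minutes between intersections.
roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)], "C": [("B", 1), ("D", 8)], "D": []}
print(dijkstra(roads, "A"))  # {'A': 0.0, 'C': 2.0, 'B': 3.0, 'D': 8.0}
```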
Signal processing is a subfield of mathematics and electrical engineering that concerns the analysis and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound and biological measurements. For example, signal processing techniques are used to improve signal transmission fidelity, storage efficiency, and subjective quality, and to emphasize or detect components of interest in a measured signal. According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. Oppenheim and Schafer further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. Analog signal processing is for signals that have not been digitized, as in legacy radio, telephone, and television systems; this involves linear electronic circuits as well as non-linear ones. The former are, for instance, passive filters, active filters, additive mixers, and delay lines.
Non-linear circuits include compandors, voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops. Continuous-time signal processing is for signals that vary with the change of a continuous domain; the methods here include the time domain, the frequency domain, and the complex frequency domain. This technology discusses the modeling of linear time-invariant continuous systems, the integral of the system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. Discrete-time signal processing is for sampled signals, defined only at discrete points in time, and as such quantized in time but not in magnitude. Analog discrete-time signal processing is a technology based on electronic devices such as sample-and-hold circuits, analog time-division multiplexers, analog delay lines, and analog feedback shift registers; this technology was a predecessor of digital signal processing, and is still used in advanced processing of gigahertz signals. The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.
Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays, or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform, finite impulse response (FIR) filters, infinite impulse response (IIR) filters, and adaptive filters such as the Wiener and Kalman filters. Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatio-temporal domains. Nonlinear systems can produce complex behaviors including bifurcations, chaos, and subharmonics which cannot be produced or analyzed using linear methods. Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks.
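As a brief hedged illustration of the FIR filtering mentioned above (a sketch, not a definitive implementation), here is a 5-tap moving-average FIR filter applied via direct convolution in NumPy; the test signal and tap count are invented.

```python
import numpy as np

# 5-tap moving-average FIR filter: each output sample is the mean
# of the current and the previous four input samples.
taps = np.ones(5) / 5.0

# Invented test signal: a slow sinusoid plus additive noise.
rng = np.random.default_rng(0)
t = np.arange(200)
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(t.size)

# Direct convolution: y[n] = sum_k taps[k] * x[n - k].
y = np.convolve(x, taps, mode="same")
print(y[:5])  # the filtered output is smoother than the input
```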
Statistical techniques are used in many signal processing applications. For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image.

Application fields include:
Audio signal processing – for electrical signals representing sound, such as speech or music
Speech signal processing – for processing and interpreting spoken words
Image processing – in digital cameras and various imaging systems
Video processing – for interpreting moving pictures
Wireless communication – waveform generation, filtering, equalization
Control systems
Array processing – for processing signals from arrays of sensors
Process control – a variety of signals are used, including the industry-standard 4–20 mA current loop
Seismology
Financial signal processing – analyzing financial data using signal processing techniques for prediction purposes
Feature extraction – such as image understanding and speech recognition
Quality improvement – such as noise reduction, image enhancement, and echo cancellation
Compression – including audio compression, image compression, and video compression
Genomics – genomic signal processing

In communication systems, signal processing may occur at OSI layer 1 (the physical layer) in the seven-layer OSI model.

Typical devices include:
Filters – for example analog or digital
Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as a digital signal, and later rebuilding the original signal or an approximation thereof
Signal compressors
Digital signal processors

Mathematical methods applied include:
Differential equations
Recurrence relations
Transform theory
Time-frequency analysis – for processing non-stationary signals
Spectral estimation – for determining the spectral content of a signal