Quantization (signal processing)
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set to output values in a smaller set with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of all lossy compression algorithms; the difference between an input value and its quantized value is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer; because quantization is a many-to-few mapping, it is an inherently non-linear and irreversible process. The set of possible input values may be infinitely large, and may be continuous and therefore uncountable; the set of possible output values may be finite or countably infinite. The input and output sets involved in quantization can be defined in a rather general way.
For example, vector quantization is the application of quantization to multi-dimensional input data. An analog-to-digital converter can be modeled as two processes: sampling and quantization. Sampling converts a time-varying voltage signal into a discrete-time signal, a sequence of real numbers. Quantization replaces each real number with an approximation from a finite set of discrete values. Most often, these discrete values are represented as fixed-point words. Though any number of quantization levels is possible, common word lengths are 8-bit, 16-bit and 24-bit. Quantizing a sequence of numbers produces a sequence of quantization errors, sometimes modeled as an additive random signal called quantization noise because of its stochastic behavior; the more levels a quantizer uses, the lower its quantization noise power. Rate–distortion optimized quantization is encountered in source coding for lossy data compression algorithms, where the purpose is to manage distortion within the limits of the bit rate supported by a communication channel or storage medium.
The analysis of quantization in this context involves studying the amount of data used to represent the quantizer's output and the loss of precision introduced by the quantization process. As an example, rounding a real number x to the nearest integer value forms a basic type of quantizer – a uniform one. A typical uniform quantizer with a quantization step size equal to some value Δ can be expressed as Q(x) = Δ · ⌊x/Δ + 1/2⌋, where ⌊ ⌋ denotes the floor function; the essential property of a quantizer is that it has a countable set of possible output values with fewer members than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size Δ is equal to 1. With Δ = 1, or with Δ equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs. When the quantization step size is small relative to the variation in the signal being quantized, it is simple to show that the mean squared error produced by such a rounding operation is approximately Δ²/12.
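The Δ²/12 rule can be checked empirically; in the sketch below, the step size Δ and the input distribution are arbitrary choices for illustration.

```python
import numpy as np

# Mid-tread uniform quantizer Q(x) = Δ·⌊x/Δ + 1/2⌋, with an empirical
# check of the Δ²/12 quantization-noise-power rule.
rng = np.random.default_rng(0)
delta = 0.25                       # quantization step size Δ (arbitrary)
x = rng.uniform(-10, 10, 100_000)  # input varies widely relative to Δ

q = delta * np.floor(x / delta + 0.5)  # quantized values
err = x - q                            # quantization error per sample

mse = np.mean(err**2)
print(mse, delta**2 / 12)  # the two values should agree closely
```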
This mean squared error is called the quantization noise power. Adding one bit to the quantizer halves the value of Δ, which reduces the noise power by the factor ¼. In terms of decibels, the noise power change is 10·log₁₀(¼) ≈ −6 dB. Because the set of possible output values of a quantizer is countable, any quantizer can be decomposed into two distinct stages, referred to as the classification stage and the reconstruction stage: the classification stage maps the input value to an integer quantization index k, and the reconstruction stage maps the index k to the reconstruction value y_k, the output approximation of the input value.
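This two-stage decomposition can be sketched as follows; the step size and the function names are illustrative, not standard API.

```python
import numpy as np

# Any quantizer split into two stages: classification maps an input x to an
# integer index k; reconstruction maps k to the reconstruction value y_k.
delta = 0.5

def classify(x):
    # classification stage: input value -> integer quantization index k
    return int(np.floor(x / delta + 0.5))

def reconstruct(k):
    # reconstruction stage: index k -> reconstruction value y_k
    return k * delta

x = 1.3
k = classify(x)
print(k, reconstruct(k))  # composing the stages reproduces the uniform quantizer
```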
In control engineering, a state-space representation is a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations or difference equations. State variables are variables whose values evolve through time in a way that depends on the values they have at any given time and on the externally imposed values of input variables. Output variables' values depend on the values of the state variables. The "state space" is the Euclidean space in which the variables on the axes are the state variables; the state of the system can be represented as a vector within that space. To abstract from the number of inputs, outputs and states, these variables are expressed as vectors. Additionally, if the dynamical system is linear, time-invariant and finite-dimensional, the differential and algebraic equations may be written in matrix form. The state-space method is characterized by a significant algebraization of general system theory, which makes it possible to use Kronecker vector-matrix structures; these structures can be efficiently applied to systems with or without modulation.
The state-space representation provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With p inputs and q outputs, we would otherwise have to write down q × p Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions; the state-space model is used in many different areas. In econometrics, for example, state-space models can be used for forecasting stock prices and numerous other variables. The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system, n, is usually, but not necessarily, equal to the order of the system's defining differential equation. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction.
It is important to understand that converting a state-space realization to transfer function form may lose some internal information about the system, and may describe a system as stable when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit, such as capacitors and inductors. The state variables defined must be linearly independent, i.e. no state variable can be written as a linear combination of the other state variables, or the system cannot be solved. The most general state-space representation of a linear system with p inputs, q outputs and n state variables is written in the following form: ẋ(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t), where x is called the "state vector", x(t) ∈ ℝⁿ.
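The matrix equations above can be simulated directly. The sketch below integrates ẋ = Ax + Bu with a simple forward-Euler scheme for an arbitrary example system; the matrices (a damped oscillator), step size and input are illustrative assumptions, not a canonical model.

```python
import numpy as np

# State-space simulation sketch: x' = Ax + Bu, y = Cx + Du.
A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])   # example dynamics: damped harmonic oscillator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 0.001
x = np.zeros((2, 1))           # state vector x ∈ R^2, initially at rest
u = np.array([[1.0]])          # constant (step) input

for _ in range(100_000):       # integrate 100 seconds with forward Euler
    x = x + dt * (A @ x + B @ u)
y = C @ x + D @ u
print(y.item())                # settles near the DC gain -C A^{-1} B = 1
```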
Finite element method
The finite element method (FEM) is a numerical method for solving problems of engineering and mathematical physics. Typical problem areas of interest include structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential; the analytical solution of these problems generally requires the solution of boundary value problems for partial differential equations. The finite element method formulation of the problem results in a system of algebraic equations; the method approximates the unknown function over the domain. To solve the problem, it subdivides a large system into smaller, simpler parts called finite elements; the simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function. Studying or analyzing a phenomenon with FEM is referred to as finite element analysis. The subdivision of a whole domain into simpler parts has several advantages: accurate representation of complex geometry, inclusion of dissimilar material properties, easy representation of the total solution, and capture of local effects.
A typical workflow of the method involves dividing the domain of the problem into a collection of subdomains, with each subdomain represented by a set of element equations corresponding to the original problem, followed by systematically recombining all sets of element equations into a global system of equations for the final calculation. The global system of equations has known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer. In the first step above, the element equations are simple equations that locally approximate the original complex equations to be studied, where the original equations are often partial differential equations. To explain the approximation in this process, FEM is commonly introduced as a special case of the Galerkin method; the process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions and set the integral to zero. In simple terms, it is a procedure that minimizes the error of approximation by fitting trial functions into the PDE.
The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. The process eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally with a set of algebraic equations for steady-state problems and a set of ordinary differential equations for transient problems; these equation sets are the element equations. They are linear if the underlying PDE is linear, and nonlinear otherwise. Algebraic equation sets that arise in steady-state problems are solved using numerical linear algebra methods, while ordinary differential equation sets that arise in transient problems are solved by numerical integration using standard techniques such as Euler's method or the Runge–Kutta method. In the second step above, a global system of equations is generated from the element equations through a transformation of coordinates from the subdomains' local nodes to the domain's global nodes; this spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system.
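As a deliberately tiny illustration of these steps, the sketch below assembles linear-element equations for a 1D Poisson problem and solves the resulting global algebraic system; the specific problem, mesh size and boundary conditions are arbitrary choices for illustration.

```python
import numpy as np

# 1D finite element sketch: solve -u'' = 1 on [0, 1] with u(0) = u(1) = 0
# using linear (hat-function) elements on a uniform mesh.
n_el = 8                      # number of elements
h = 1.0 / n_el                # element size
n_in = n_el - 1               # interior (unknown) nodes

# Element equations assembled into the global stiffness matrix K and load f:
K = np.zeros((n_in, n_in))
f = np.full(n_in, h)          # ∫ 1·φ_i dx = h for each interior hat function
for i in range(n_in):
    K[i, i] = 2.0 / h
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -1.0 / h

u = np.linalg.solve(K, f)     # global system, solved by a direct method

# Exact solution u(x) = x(1 - x)/2; linear elements are nodally exact here.
mid = u[n_in // 2]            # value at the node x = 0.5
print(mid)                    # ≈ 0.125
```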
The process is often carried out by FEM software using coordinate data generated from the subdomains. FEM is best understood from its practical application, known as finite element analysis (FEA). FEA as applied in engineering is a computational tool for performing engineering analysis; it includes the use of mesh generation techniques for dividing a complex problem into small elements, as well as the use of software programs coded with an FEM algorithm. In applying FEA, the complex problem is usually a physical system with underlying physics such as the Euler–Bernoulli beam equation, the heat equation, or the Navier–Stokes equations, expressed in either PDE or integral equations, while the divided small elements of the complex problem represent different areas in the physical system. FEA is a good choice for analyzing problems over complicated domains, when the domain changes, when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations provide a valuable resource, as they remove multiple instances of creating and testing hard prototypes for various high-fidelity situations.
For instance, in a frontal crash simulation it is possible to increase prediction accuracy in "important" areas like the front of the car and reduce it in its rear. Another example would be numerical weather prediction, where it is more important to have accurate predictions over developing nonlinear phenomena than over calm areas. While it is difficult to quote a date for the invention of the finite element method, the method originated from the need to solve complex elasticity and structural analysis problems in civil and aeronautical engineering; its development can be traced back to the work of R. Courant in the early 1940s. Another pioneer was Ioannis Argyris. In the USSR, the introduction of the practical application of the method is connected with the name of Leonard Oganesyan. In China, in the 1950s and early 1960s, based on the computations of dam constructions, K. Feng proposed a systematic numerical method for solving partial differential equations; the method was called the finite difference method based on the variation principle, which was another independent invention of the finite element method.
In Itô calculus, the Euler–Maruyama method is a method for the approximate numerical solution of a stochastic differential equation (SDE). It is a simple generalization of the Euler method for ordinary differential equations to stochastic differential equations, and is named after Gisiro Maruyama. The same generalization cannot be done for an arbitrary deterministic method. Consider the stochastic differential equation dXt = a(Xt, t) dt + b(Xt, t) dWt, with initial condition X0 = x0, where Wt stands for the Wiener process, and suppose that we wish to solve this SDE on some interval of time [0, T]. The Euler–Maruyama approximation to the true solution X is the Markov chain Y defined as follows: partition the interval [0, T] into N equal subintervals of width Δt > 0, 0 = τ0 < τ1 < ⋯ < τN = T with Δt = T/N; set Y0 = x0; and recursively define Yn+1 = Yn + a(Yn, τn) Δt + b(Yn, τn) ΔWn, where ΔWn = W(τn+1) − W(τn). The random variables ΔWn are independent and identically distributed normal random variables with expected value zero and variance Δt. An area that has benefited from SDEs is biology, or more precisely mathematical biology. Here the number of publications on the use of stochastic models has grown, as most of the models are nonlinear and demand numerical schemes.
The graphic depicts a stochastic differential equation being solved using the Euler scheme, with the deterministic counterpart shown as well. The following Python code implements the Euler–Maruyama method and uses it to solve the Ornstein–Uhlenbeck process defined by dYt = θ·(μ − Yt) dt + σ dWt, Y0 = Y_init. The random numbers for dWt are generated using the NumPy mathematics package. See also: Milstein method, Runge–Kutta method (SDE).
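The code referred to above is not reproduced in this excerpt; a minimal sketch consistent with that description is given below, where the parameter values θ, μ, σ, Y_init, T and N are assumptions chosen for illustration.

```python
import numpy as np

# Euler–Maruyama sketch for the Ornstein–Uhlenbeck process
#   dY_t = θ(μ − Y_t) dt + σ dW_t,  Y_0 = Y_init.
theta, mu, sigma = 0.7, 1.5, 0.06   # assumed parameter values
Y_init = 0.0
T, N = 5.0, 1000
dt = T / N

rng = np.random.default_rng(42)
Y = np.empty(N + 1)
Y[0] = Y_init
for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))   # ΔW_n ~ N(0, Δt)
    Y[n + 1] = Y[n] + theta * (mu - Y[n]) * dt + sigma * dW

print(Y[-1])  # the path drifts toward the mean-reversion level μ
```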
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Numerical analysis finds application in all fields of engineering and the physical sciences, but in the 21st century the life sciences, social sciences, medicine and the arts have also adopted elements of scientific computation. As an aspect of mathematics and computer science that generates and implements algorithms, the field has been transformed by the growth in computing power: the revolution in computing has raised the use of realistic mathematical models in science and engineering, and complex numerical analysis is required to provide solutions to these more involved models of the world. Ordinary differential equations appear, for example, in celestial mechanics. Before the advent of modern computers, numerical methods often depended on hand interpolation using large printed tables. Since the mid-20th century, computers calculate the required functions instead; these same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations.
One of the earliest mathematical writings is a Babylonian tablet from the Yale Babylonian Collection, which gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle is important, for instance, in astronomy and construction. Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following: Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes; such simulations consist of solving partial differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, crew assignments, and fuel needs; such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, the Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve good numerical estimates of some functions; the canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s; it was then found that these computers were also useful for administrative purposes, but the invention of the computer influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution. In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has been found. Even using infinite precision arithmetic, these methods would not in general reach the solution within a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
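As a small illustration of an iterative method with a convergence test, the sketch below applies Newton's method to f(x) = x² − 2, whose positive root is the square root of 2 approximated on the Babylonian tablet mentioned earlier; the starting guess and tolerance are arbitrary choices.

```python
# Newton's method: an iterative method with an explicit convergence test.
def newton_sqrt2(x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - (x * x - 2.0) / (2.0 * x)   # Newton update x - f(x)/f'(x)
        if abs(x_new - x) < tol:                # convergence test on the step
            return x_new
        x = x_new
    return x

root = newton_sqrt2(1.0)
print(root)  # ≈ 1.41421356...
```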
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions represent physical quantities, the derivatives represent their rates of change, the equation defines a relationship between the two; because such relations are common, differential equations play a prominent role in many disciplines including engineering, physics and biology. In pure mathematics, differential equations are studied from several different perspectives concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas. If a closed-form expression for the solution is not available, the solution may be numerically approximated using computers; the theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy. Differential equations first came into existence with the invention of calculus by Newton and Leibniz.
In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Isaac Newton listed three kinds of differential equations: dy/dx = f(x), dy/dx = f(x, y), and x1·∂y/∂x1 + x2·∂y/∂x2 = y. He solved these examples and others using infinite series and discussed the non-uniqueness of solutions. Jacob Bernoulli proposed the Bernoulli differential equation in 1695; this is an ordinary differential equation of the form y′ + P(x)y = Q(x)yⁿ, for which the following year Leibniz obtained solutions by simplifying it. The problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation; the Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point.
Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur, in which he based his reasoning on Newton's law of cooling, that the flow of heat between two adjacent molecules is proportional to the small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat; this partial differential equation is now taught to every student of mathematical physics. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance.
The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity; this means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity. Finding the velocity as a function of time involves solving a differential equation and verifying its validity. Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or inhomogeneous; this list is far from exhaustive. An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x.
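The falling-ball model just described can also be integrated numerically; in the sketch below, the values of g and the drag coefficient k are illustrative assumptions.

```python
# Falling ball with linear air resistance: dv/dt = g - k*v.
g = 9.81      # gravitational acceleration, m/s^2
k = 0.5       # drag coefficient per unit mass, 1/s (assumed value)
dt = 0.01     # time step for a simple Euler integration
v = 0.0       # ball released from rest

for _ in range(3000):             # integrate 30 seconds
    v = v + dt * (g - k * v)      # acceleration = gravity minus drag

print(v, g / k)  # velocity approaches the terminal velocity g/k = 19.62
```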
A mathematical constant is a number that is "significantly interesting in some way". Constants arise in many areas of mathematics, with constants such as e and π occurring in such diverse contexts as geometry, number theory, and calculus. What it means for a constant to arise "naturally", and what makes a constant "interesting", is ultimately a matter of taste, and some mathematical constants are notable more for historical reasons than for their intrinsic mathematical interest; the more popular constants have been studied throughout the ages and computed to many decimal places. All mathematical constants are definable numbers and are usually also computable numbers. Some are constants which one is likely to encounter during pre-college education in many countries. The constant π has a natural definition in Euclidean geometry, but may be found in many places in mathematics: for example, the Gaussian integral in complex analysis, the roots of unity in number theory, and Cauchy distributions in probability. However, its ubiquity is not limited to pure mathematics.
It appears in many formulas in physics, and several physical constants are most naturally defined with π or its reciprocal factored out. It is debatable, however, whether such appearances are fundamental in any sense. For example, the textbook nonrelativistic ground state wave function of the hydrogen atom is ψ(r) = (1/(π a0³))^(1/2) e^(−r/a0), where a0 is the Bohr radius. This formula contains a π, but it is unclear whether that is fundamental in a physical sense, or whether it just reflects the π in the expression 4πr² for the surface area of a sphere of radius r. Furthermore, this formula gives only an approximate description of physical reality, as it omits spin and the quantal nature of the electromagnetic field itself. Likewise, the appearance of π in the formula for Coulomb's law in SI units depends on that choice of units, a historical accident having to do with how the so-called permittivity of free space was introduced into the practice of electromagnetism by Giovanni Giorgi in 1901. It is true that once various constants are chosen in one relation, the appearance of π in other relationships is unavoidable, but that appearance is always for a mathematical reason, as in the example of the hydrogen atom wave function above, not a physical one.
The numeric value of π is approximately 3.1415926535. Memorizing ever more precise digits of π is a world-record pursuit. Euler's number e, also known as the exponential growth constant, appears in many areas of mathematics; one possible definition of it is the value of the expression e = lim n→∞ (1 + 1/n)ⁿ. For example, the Swiss mathematician Jacob Bernoulli discovered that e arises in compound interest: an account that starts at $1 and yields interest at annual rate R with continuous compounding will accumulate to e^R dollars at the end of one year. The constant e also has applications to probability theory, where it arises in a way not obviously related to exponential growth. Suppose a slot machine with a one-in-n probability of winning is played n times. For large n, the probability that nothing will be won is approximately 1/e, and tends to this value as n tends to infinity. Another application of e, discovered in part by Jacob Bernoulli along with French mathematician Pierre Raymond de Montmort, is in the problem of derangements, also known as the hat check problem.
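The slot-machine claim is easy to check numerically; the values of n below are arbitrary.

```python
import math

# With win probability 1/n played n times, the probability of never winning
# is (1 - 1/n)**n, which tends to 1/e as n grows.
for n in (10, 1000, 1_000_000):
    print(n, (1 - 1 / n) ** n)

print(1 / math.e)  # limiting value ≈ 0.36787944...
```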
Here n guests are invited to a party, and at the door each guest checks his hat with the butler, who places them into labelled boxes. The butler does not know the names of the guests, and so must put the hats into boxes selected at random. The problem of de Montmort is: what is the probability that none of the hats gets put into the right box? The answer is pn = 1 − 1/1! + 1/2! − 1/3! + ⋯ + (−1)ⁿ/n!, and as n tends to infinity, pn approaches 1/e. The numeric value of e is approximately 2.7182818284. The imaginary unit or unit imaginary number, denoted as i, is a mathematical concept which extends the real number system ℝ to the complex number system ℂ, which in turn provides at least one root for every non-constant polynomial P(x); the imaginary unit's core property is that i² = −1. The term "imaginary" is used because there is no real number having a negative square. There are in fact two complex square roots of −1, namely i and −i, just as there are two complex square roots of every other real number except zero, which has one double square root.
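The de Montmort series above can be evaluated directly; the values of n are arbitrary.

```python
import math

# Derangement probability p_n = sum_{k=0}^{n} (-1)^k / k!, the probability
# that no hat lands in its own box; p_n converges to 1/e very quickly.
def p(n):
    return sum((-1) ** k / math.factorial(k) for k in range(n + 1))

for n in (3, 5, 10):
    print(n, p(n))

print(1 / math.e)  # limiting value of p_n
```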