1.
Numerical analysis
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle is important, for instance, in astronomy and carpentry, and numerical analysis continues this tradition of practical mathematical calculation. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers; instead, much of the field is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead, yet these same interpolation formulas continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of differential equations. Car companies can improve the safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks. Airlines use sophisticated optimization algorithms to decide ticket prices and airplane and crew assignments; historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field predates the invention of modern computers by many centuries: linear interpolation was already in use more than 2000 years ago, and to facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients.
Such tables of function values are not very useful when a computer is available. The mechanical calculator was also developed as a tool for hand computation; these calculators evolved into electronic computers in the 1940s, and it was then found that computers were also useful for administrative purposes. The invention of the computer in turn influenced the field of numerical analysis, since it allowed longer and more complicated calculations to be carried out.
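The Babylonian approximation of the square root of 2 mentioned above can be sketched in a few lines; the starting guess and iteration count here are arbitrary illustrative choices:

```python
def babylonian_sqrt(s, x0=1.0, iterations=6):
    """Approximate sqrt(s) by the Babylonian (Heron's) iteration
    x -> (x + s/x) / 2, which converges quadratically for x0 > 0."""
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + s / x)
    return x

approx = babylonian_sqrt(2.0)
```

The method illustrates the theme of the section: it never produces the exact irrational answer, but a handful of iterations already gives an approximation whose error is far below any practical tolerance.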
2.
Computer simulation
–
Computer simulations reproduce the behavior of a system using a model. A simulation is the running of the system's model; it can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions. The scale of events being simulated by computer simulations has far exceeded anything possible using traditional paper-and-pencil mathematical modeling; one example is a 1-billion-atom model of material deformation. Because of the computational cost of simulation, computer experiments are used to perform analyses such as uncertainty quantification. A computer model is the set of algorithms and equations used to capture the behavior of the system being modeled; by contrast, a computer simulation is the actual running of the program that contains these equations or algorithms. Simulation, therefore, is the process of running a model: one would not build a simulation; instead, one would build a model, and then either run the model or, equivalently, run a simulation. An early example was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. The external data requirements of simulations and models vary widely: for some, the input might be just a few numbers, while others might require terabytes of information. Systems that accept data from external sources must be careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, input values often carry uncertainty, expressed as error bars giving a minimum and maximum deviation, the range within which the true value is expected to lie. Even small errors in the input data can accumulate into substantial error later in the simulation.
While all computer analysis is subject to the GIGO (garbage in, garbage out) restriction, this is especially true of digital simulation. Indeed, observation of this inherent, cumulative error in digital systems was the main catalyst for the development of chaos theory. Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes: simulations which store their data in regular grids and require only next-neighbor access are called stencil codes, and many CFD applications belong to this category; if the underlying graph is not a regular grid, the model may belong to the meshfree method class. Equation-based models define the relationships between elements of the system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.
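The next-neighbor access pattern that characterizes a stencil code can be illustrated with a minimal time-stepped diffusion update on a regular 1D grid; the grid size, coefficient, and initial condition below are arbitrary illustrative choices:

```python
def heat_step(u, alpha=0.1):
    """One explicit time step of a 1D diffusion model on a regular grid.
    Each interior point is updated from its two nearest neighbours only,
    which is the access pattern that defines a stencil code."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
    return new

u = [0.0] * 21
u[10] = 1.0          # initial heat spike in the middle of the grid
for _ in range(50):  # time-stepped simulation: repeat the stencil update
    u = heat_step(u)
```

After repeated steps the spike spreads symmetrically toward the boundaries, which are held at zero.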
3.
Scientific visualization
–
Scientific visualization is an interdisciplinary branch of science. It is also considered a subset of computer graphics, a branch of computer science. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand, illustrate, and glean insight from their data. One of the earliest examples of scientific visualization was Maxwell's thermodynamic surface, which prefigured modern scientific techniques that use computer graphics. Scientific visualization using computer graphics gained in popularity as graphics matured; primary applications were scalar fields and vector fields from computer simulations and also measured data. The primary methods for visualizing two-dimensional scalar fields are color mapping and drawing contour lines; 2D vector fields are visualized using glyphs and streamlines or line integral convolution methods. For 3D scalar fields the primary methods are volume rendering and isosurfaces; methods for visualizing vector fields include glyphs such as arrows, streamlines and streaklines, particle tracing, line integral convolution and topological methods. Later, visualization techniques such as hyperstreamlines were developed to visualize tensor fields. Computer animation is the art, technique, and science of creating moving images via the use of computers. Animations are increasingly created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic or low-bandwidth reasons. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium; the result is also referred to as CGI, especially when used in films. A computer simulation is a computer program, or a network of computers, that runs a model of a system. The simultaneous visualization and simulation of a system is called visulation. Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for months.
Information visualization is focused on the creation of approaches for conveying information in intuitive ways. The key difference between scientific visualization and information visualization is that information visualization is often applied to data that is not generated by scientific inquiry; examples are graphical representations of data for business, government, news and social media. Interface technology and perception research show how new interfaces and an understanding of underlying perceptual issues create new opportunities for the scientific visualization community. Rendering is the process of generating an image from a model: the model is a description of three-dimensional objects in a strictly defined language or data structure, containing geometry, viewpoint, texture, and lighting information, and the image is a digital image or raster graphics image. The term may be used by analogy with an artist's rendering of a scene.
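Color mapping, named above as a primary method for visualizing 2D scalar fields, amounts to converting each scalar sample into a color. A minimal sketch with a hypothetical linear blue-to-red ramp (real visualization packages offer many perceptually tuned colormaps):

```python
def colormap(value, vmin, vmax):
    """Map a scalar to an (r, g, b) triple on a linear blue-to-red ramp,
    a minimal example of the colour-mapping technique for scalar fields."""
    t = (value - vmin) / (vmax - vmin)
    t = min(1.0, max(0.0, t))    # clamp out-of-range data to the ramp ends
    return (t, 0.0, 1.0 - t)     # low values blue, high values red

# Apply the map to every sample of a small synthetic 2D scalar field.
field = [[x * y for x in range(4)] for y in range(4)]
flat = [v for row in field for v in row]
image = [[colormap(v, min(flat), max(flat)) for v in row] for row in field]
```

Each cell of `image` is now a displayable color, which is exactly what a visualization system rasterizes to the screen.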
4.
Morse/Long-range potential
–
Owing to the simplicity of the Morse potential, it is not used in modern spectroscopy. The MLR (Morse/Long-range) potential is a version of the Morse potential which has the correct theoretical long-range form of the potential naturally built into it. Cases of particular note include the c-state of Li2, where the MLR potential was able to bridge a gap of more than 5000 cm−1 in experimental data; two years later it was found that Dattani's MLR potential was able to predict the energies in the middle of this gap. The accuracy of these predictions was much better than that of the most sophisticated ab initio techniques at the time, and the associated lithium oscillator strength is related to the radiative lifetime of atomic lithium and is used as a benchmark for atomic clocks and measurements of fundamental constants. It has been said that work by Le Roy et al. was a landmark in diatomic spectral analysis. Another case is the a-state of KLi, where a global potential was successfully built despite there being only a small amount of data near the top of the potential. The MLR potential is based on the classic Morse potential, which was first introduced in 1929 by Philip M. Morse. A primitive version of the MLR potential was first introduced in 2006 by Professor Robert J. Le Roy and colleagues for a study on N2; this primitive form was used on Ca2, KLi and MgH before the modern version was introduced in 2009 by Le Roy and Dattani. A further extension of the MLR potential, referred to as the MLR3 potential, was introduced in a 2010 study of Cs2. In the MLR form, since lim r→∞ y_p(r) = 1, it follows that lim r→∞ β(r) = β∞. More sophisticated versions are used for polyatomic molecules; examples of molecules for which the MLR has been used to represent ab initio points are KLi and KBe.
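The limiting behaviour β(r) → β∞ can be made concrete with a sketch of the standard MLR functional form. This is a simplified illustration, not a fitted potential: it assumes a single C6 long-range term, a single constant β coefficient, and arbitrary parameter values.

```python
import math

def mlr_potential(r, De=1.0, re=1.0, C6=1.0, beta0=0.0, p=3):
    """Sketch of a Morse/Long-range (MLR) potential,
    V(r) = De * (1 - (u(r)/u(re)) * exp(-beta(r) * y_p(r)))**2,
    with u(r) = C6/r^6 as the long-range tail. Choosing
    beta_inf = ln(2*De/u(re)) enforces V(r) -> De - u(r) at long range.
    All parameter values here are illustrative, not fitted."""
    def u_lr(x):
        return C6 / x ** 6                       # long-range tail u_LR(r)
    y = (r ** p - re ** p) / (r ** p + re ** p)  # radial variable y_p(r)
    beta_inf = math.log(2.0 * De / u_lr(re))     # limiting exponent value
    beta = y * beta_inf + (1.0 - y) * beta0      # beta(r) -> beta_inf as r -> inf
    return De * (1.0 - (u_lr(r) / u_lr(re)) * math.exp(-beta * y)) ** 2
```

With this construction the potential is zero at its minimum r = re and rises to the dissociation limit De at large r, approaching it with the built-in C6/r^6 form.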
5.
Lennard-Jones potential
–
The Lennard-Jones potential is a mathematically simple model that approximates the interaction between a pair of neutral atoms or molecules. A form of this potential was first proposed in 1924 by John Lennard-Jones. At rm, the function has the value −ε, and the distances are related as rm = 2^(1/6)σ ≈ 1.122σ. These parameters can be fitted to reproduce experimental data or accurate quantum chemistry calculations. Due to its simplicity, the Lennard-Jones potential is used extensively in computer simulations even though more accurate potentials exist. Differentiating the L-J potential with respect to r gives an expression for the net inter-molecular force between two molecules; this force may be attractive or repulsive, depending on the value of r. When r is very small, the two molecules repel each other. Whereas the functional form of the attractive term has a clear physical justification, the repulsive term has no theoretical justification; it is used because it approximates the Pauli repulsion well and is convenient computationally, since r^−12 can be calculated as the square of r^−6. The Lennard-Jones potential was improved upon by the Buckingham potential, later proposed by R. A. Buckingham, in which the repulsive part is an exponential function. The L-J potential is a good approximation and, due to its simplicity, is often used to describe the properties of gases. It is especially accurate for noble gas atoms and is a reasonable approximation at long and short distances for neutral atoms and molecules. The lowest-energy arrangement of a number of atoms described by a Lennard-Jones potential is hexagonal close-packing. On raising temperature, the lowest free-energy arrangement becomes cubic close-packing, and under pressure the lowest-energy structure switches between cubic and hexagonal close-packing. Real materials include BCC structures also. Other, more recent models, such as the Stockmayer potential, describe the interaction of molecules more accurately.
Quantum chemistry methods such as Møller–Plesset perturbation theory, the coupled cluster method, or full configuration interaction can give highly accurate results. There are many different ways to formulate the Lennard-Jones potential; one formulation used by some simulation software is V_LJ(r) = A/r^12 − B/r^6.
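The 12-6 form and its force can be written down directly; a minimal sketch in reduced units (ε = σ = 1), with the r^12 term computed as the square of the r^6 term as described above:

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 potential
    V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)  # r^-12 as the square of r^-6

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Radial force F = -dV/dr; positive means repulsive."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r

r_min = 2.0 ** (1.0 / 6.0)  # minimum of the potential, r_m = 2^(1/6)*sigma
```

At r = rm the potential takes its minimum value −ε and the force vanishes; at shorter separations the force becomes repulsive, as the text describes.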
6.
Yukawa potential
–
The potential is monotonically increasing in r and it is negative, implying the force is attractive. In the SI system, the unit of the Yukawa potential is 1/m. The Coulomb potential of electromagnetism is an example of a Yukawa potential with the factor e^(−mr) equal to 1 everywhere; this can be interpreted as saying that the mass m is equal to 0. In interactions between a meson field and a fermion field, the constant g is equal to the coupling constant between those fields. In the case of the nuclear force, the fermions would be nucleons. Hideki Yukawa showed in the 1930s that such a potential arises from the exchange of a scalar field such as the field of a massive boson. Since the field mediator is massive, the corresponding force has a certain range; if the mass is zero, then the Yukawa potential equals a Coulomb potential, and the range is said to be infinite. In fact, m = 0 implies e^(−mr) = e^0 = 1, so the equation V_Yukawa(r) = −g² e^(−mr)/r simplifies to the form of the Coulomb potential, V_Coulomb(r) = −g²/r. A comparison of the long-range potential strength for Yukawa and Coulomb is shown in Figure 2; it can be seen that the Coulomb potential has effect over a greater distance, whereas the Yukawa potential approaches zero rather quickly. However, both the Yukawa and the Coulomb potential are non-zero for any large r. The easiest way to understand that the Yukawa potential is associated with a massive field is by examining its Fourier transform. One has V(r) = −g²/(2π)³ ∫ e^(ik·r) 4π/(k² + m²) d³k, where the integral is performed over all possible values of the 3-vector momentum k. In this form, the fraction 4π/(k² + m²) is seen to be the propagator, or Green's function, of the Klein–Gordon equation. The Yukawa potential can be derived as the lowest-order amplitude of the interaction of a pair of fermions: the Yukawa interaction couples the fermion field ψ(x) to the meson field ϕ(x) with the coupling term L_int = g ψ̄(x) ϕ(x) ψ(x).
The scattering amplitude for two fermions, one with initial momentum p₁ and the other with momentum p₂, exchanging a meson with momentum k, is given by the Feynman diagram on the right. The Feynman rules for each vertex associate a factor of g with the amplitude. The line in the middle, connecting the two fermion lines, represents the exchange of a meson, and the Feynman rule for the exchange is to use the propagator. Thus, we see that the Feynman amplitude for this graph is nothing more than V(k) = −g² 4π/(k² + m²), which, from the previous section, is seen to be the Fourier transform of the Yukawa potential.
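The comparison between the two potentials, and the m = 0 limit, can be checked numerically; units and coupling here are illustrative (g = 1, natural units):

```python
import math

def yukawa(r, g=1.0, m=1.0):
    """Yukawa potential V(r) = -g^2 * exp(-m*r) / r (illustrative units)."""
    return -g * g * math.exp(-m * r) / r

def coulomb(r, g=1.0):
    """Coulomb potential V(r) = -g^2 / r, the m = 0 limit of the Yukawa form."""
    return -g * g / r
```

Setting m = 0 reproduces the Coulomb potential exactly, while for m > 0 the exponential factor makes the Yukawa potential fall off much faster at large r, as described above.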
7.
Morse potential
–
The Morse potential, named after physicist Philip M. Morse, is a convenient interatomic interaction model for the potential energy of a diatomic molecule. It is a better approximation for the vibrational structure of the molecule than the quantum harmonic oscillator (QHO) because it explicitly includes the effects of bond breaking. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands. The Morse potential can also be used to model other interactions, such as the interaction between an atom and a surface. Due to its simplicity, it is not used in modern spectroscopy; however, its mathematical form inspired the MLR potential, which is the most popular potential energy function used for fitting spectroscopic data. The dissociation energy of the bond can be calculated by subtracting the zero-point energy E(0) from the depth of the well. Since the zero of energy is arbitrary, the equation for the Morse potential can be rewritten any number of ways by adding or subtracting a constant value. One common form approaches zero at infinite r and equals −De at its minimum; it clearly shows that the Morse potential is the combination of a short-range repulsion term and a long-range attractive term. Like those of the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods; one approach involves applying such methods to the Hamiltonian. Whereas the energy spacing between levels in the quantum harmonic oscillator is constant at hν₀, the energy between adjacent levels decreases with increasing v in the Morse oscillator. Mathematically, the spacing of Morse levels is E(v+1) − E(v) = hν₀ − (v+1)(hν₀)²/(2De), and this trend matches the anharmonicity found in real molecules. However, this equation fails above some value v_m where E(v_m+1) − E(v_m) is calculated to be zero or negative; specifically, v_m = (2De − hν₀)/hν₀. This failure is due to the finite number of bound levels in the Morse potential. For energies above that of level v_m, all energy levels are allowed and the equation for E(v) no longer applies.
Below v_m, E(v) is a good approximation for the true vibrational structure in non-rotating diatomic molecules. An important extension of the Morse potential that made the Morse form very useful for modern spectroscopy is the MLR potential, which is used as a standard for representing spectroscopic and/or virial data of diatomic molecules by a potential energy curve.
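The decreasing level spacing and the finite number of bound levels can be demonstrated directly from the energy formula E(v) = hν₀(v + 1/2) − [hν₀(v + 1/2)]²/(4De); the parameter values below are illustrative, with hν₀ folded into `nu0` (i.e. `nu0` is given in energy units):

```python
def morse_levels(De, nu0):
    """Bound vibrational energies of a Morse oscillator,
    E(v) = nu0*(v + 1/2) - (nu0*(v + 1/2))**2 / (4*De),
    enumerated until the level spacing nu0 - (v+1)*nu0**2/(2*De)
    would become zero or negative (the last bound level, v_m)."""
    levels = []
    v = 0
    while True:
        e = nu0 * (v + 0.5) - (nu0 * (v + 0.5)) ** 2 / (4.0 * De)
        levels.append(e)
        spacing_next = nu0 - (v + 1) * nu0 ** 2 / (2.0 * De)
        if spacing_next <= 0.0:   # past v_m the formula no longer applies
            break
        v += 1
    return levels

energies = morse_levels(De=10.0, nu0=1.0)
spacings = [b - a for a, b in zip(energies, energies[1:])]
```

With De = 10 and hν₀ = 1 the formula v_m = (2De − hν₀)/hν₀ predicts v_m = 19, i.e. 20 bound levels, and the enumerated spacings shrink linearly toward zero, matching the anharmonic trend described in the text.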
8.
Computational fluid dynamics
–
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial experimental validation of such software is performed using a wind tunnel, with final validation coming in full-scale testing. The fundamental basis of almost all CFD problems is the Navier–Stokes equations; these equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations; finally, for small perturbations in subsonic and supersonic flows, these equations can be linearized to yield the linearized potential equations. Historically, methods were first developed to solve the linearized potential equations. Two-dimensional methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. One of the earliest types of calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space into cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD; in fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book. The available computer power paced the development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. This group was led by Francis H.
Harlow, who is considered one of the pioneers of CFD. Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world. The first paper with a three-dimensional model was published by John Hess and A. M. O. Smith of Douglas Aircraft in 1967; this method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified in that it did not include lifting flows, and hence was mainly applied to ship hulls. The first lifting Panel Code was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing, Lockheed, Douglas, McDonnell Aircraft, and NASA. Some were higher-order codes, using higher-order distributions of surface singularities, while others used single singularities on each surface panel. The advantage of the lower-order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of submarines, surface ships, automobiles, helicopters, and aircraft.
9.
Finite difference method
–
Today, FDMs are the dominant approach to numerical solutions of partial differential equations. First, assuming the function whose derivatives are to be approximated is properly behaved, by Taylor's theorem we can create a Taylor series expansion f(x₀ + h) = f(x₀) + f′(x₀)h + f″(x₀)h²/2! + ⋯ + f⁽ⁿ⁾(x₀)hⁿ/n! + R_n(x), where n! denotes the factorial of n and R_n(x) is a remainder term. The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain; this is usually done by dividing the domain into a uniform grid. Note that this means that finite-difference methods produce sets of discrete numerical approximations to the derivative. An expression of general interest is the local truncation error of a method, typically expressed using big-O notation; local truncation error refers to the error from a single application of a method. That is, it is the quantity f′(x_i) − f′_i, where f′(x_i) refers to the exact value and f′_i to the numerical approximation. The remainder term of a Taylor polynomial is convenient for analyzing the local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for f(x₀ + h), R_n(x₀ + h) = f⁽ⁿ⁺¹⁾(ξ) hⁿ⁺¹/(n + 1)!, where x₀ < ξ < x₀ + h, the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, and knowing that f(x_i + h) = f(x_i) + f′(x_i)h + f″(ξ)h²/2, some algebraic manipulation leads to (f(x_i + h) − f(x_i))/h = f′(x_i) + f″(ξ)h/2, and a final expression of this example and its order is (f(x_i + h) − f(x_i))/h = f′(x_i) + O(h). This means that, in this case, the local truncation error is proportional to the step size. The quality and duration of a simulated FDM solution depend on the choice of discretization equation and the step size: smaller step sizes improve the quality of the data but significantly lengthen the simulation, so a balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice.
However, time steps which are too large may create instabilities; the von Neumann method is usually applied to determine the stability of a numerical model. For example, consider the ordinary differential equation u′(x) = 3u(x) + 2. Replacing the derivative with the forward difference quotient (u(x + h) − u(x))/h gives u(x + h) = u(x) + h(3u(x) + 2). The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation.
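The finite-difference recurrence for u′ = 3u + 2 can be stepped directly, and comparing against the analytic solution shows the error shrinking with the step size, as the O(h) truncation error predicts; the initial condition and interval below are illustrative choices:

```python
import math

def fd_solve(u0, t_end, h):
    """Step the finite-difference equation u[n+1] = u[n] + h*(3*u[n] + 2),
    the forward-difference discretization of u'(t) = 3*u(t) + 2."""
    u = u0
    for _ in range(int(round(t_end / h))):
        u = u + h * (3.0 * u + 2.0)
    return u

def exact(u0, t):
    """Analytic solution u(t) = (u0 + 2/3)*exp(3t) - 2/3 for comparison."""
    return (u0 + 2.0 / 3.0) * math.exp(3.0 * t) - 2.0 / 3.0

err_coarse = abs(fd_solve(1.0, 1.0, 0.1) - exact(1.0, 1.0))
err_fine = abs(fd_solve(1.0, 1.0, 0.01) - exact(1.0, 1.0))
```

Halving or tenthing the step size reduces the error roughly in proportion, consistent with the first-order accuracy derived above.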
10.
Finite volume method
–
The finite volume method is a method for representing and evaluating partial differential equations in the form of algebraic equations. Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry; "finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages. Consider a simple 1D advection problem defined by the partial differential equation ∂ρ/∂t + ∂f/∂x = 0, t ≥ 0. Here, ρ = ρ(x, t) represents the state variable and f = f(ρ(x, t)) represents the flux or flow of ρ. Conventionally, positive f represents flow to the right, while negative f represents flow to the left. If we assume that the equation represents a flowing medium of constant area, we can sub-divide the spatial domain, x, into finite volumes or cells with cell centres indexed as i, and define the volume average ρ̄_i(t) of ρ over each cell. Integrating the equation in time, and assuming that f is well behaved and that we can reverse the order of integration, we have ρ̄_i(t₂) = ρ̄_i(t₁) − (1/Δx_i) ∫ from t₁ to t₂ of [f(x_{i+1/2}, t) − f(x_{i−1/2}, t)] dt; recall also that flow is normal to the area of the cell. This equation is exact for the volume averages, i.e. no approximations have been made during its derivation. The method can also be applied to a 2D situation by considering the north and south faces along with the east and west faces around a node. We can also consider the general conservation law problem, represented by the PDE ∂u/∂t + ∇·f(u) = 0, where u represents a vector of states and f represents the corresponding flux tensor. Again we can sub-divide the spatial domain into finite volumes or cells.
For a particular cell i, we take the integral over the total volume of the cell, v_i. Finally, we are able to present a general result equivalent to the 1D case. Again, values for the fluxes can be reconstructed by interpolation or extrapolation of the cell averages.
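The 1D advection example above can be sketched as a finite-volume update in which each cell average changes only by the difference of the fluxes through its two faces; the grid, speed, and upwind flux reconstruction below are illustrative choices (with periodic boundaries):

```python
def fvm_advect(rho, c, dx, dt, steps):
    """Finite-volume update for d(rho)/dt + d(c*rho)/dx = 0 with c > 0,
    using upwind interface fluxes and periodic boundaries. Each cell
    average changes only by the flux difference across its two faces,
    so the total amount of rho is conserved exactly."""
    n = len(rho)
    for _ in range(steps):
        # flux[i] is the flux through the left face of cell i,
        # reconstructed from the upwind (left) cell average.
        flux = [c * rho[(i - 1) % n] for i in range(n + 1)]
        rho = [rho[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]
    return rho

rho0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]
rho1 = fvm_advect(rho0, c=1.0, dx=1.0, dt=0.5, steps=8)
```

Because the flux leaving one cell is exactly the flux entering its neighbour, summing the update over all cells telescopes to zero: the scheme is conservative by construction, which is the key property the text describes.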
11.
Finite element method
–
The finite element method (FEM) is a numerical method for solving problems of engineering and mathematical physics; it is also referred to as finite element analysis (FEA). Typical problem areas of interest include structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The analytical solution of these problems generally requires the solution of boundary value problems for partial differential equations; the finite element method formulation of the problem results in a system of algebraic equations. The method yields approximate values of the unknowns at a discrete number of points over the domain. To solve the problem, it subdivides a large problem into smaller, simpler parts that are called finite elements. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function. The global system of equations has known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer. To explain the approximation in this process, FEM is commonly introduced as a special case of the Galerkin method. The process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions and set the integral to zero. In simple terms, it is a procedure that minimizes the error of approximation by fitting trial functions into the PDE; the residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. These equation sets are the element equations; they are linear if the underlying PDE is linear, and vice versa. Next, a global system of equations is generated from the element equations through a transformation of coordinates from the subdomains' local nodes to the domain's global nodes.
This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is typically carried out by FEM software using coordinate data generated from the subdomains. FEM is best understood from its practical application, known as finite element analysis (FEA). FEA as applied in engineering is a computational tool for performing engineering analysis. It includes the use of mesh generation techniques for dividing a complex problem into small elements. FEA is a good choice for analyzing problems over complicated domains, when the domain changes, when the desired precision varies over the entire domain, or when the solution lacks smoothness. For instance, in a crash simulation it is possible to increase prediction accuracy in important areas like the front of the car. Another example is weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena than over relatively calm areas.
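The assembly-then-solve workflow described above can be sketched for the simplest possible case: piecewise-linear elements for −u″ = f on (0, 1) with u(0) = u(1) = 0. The mesh size, right-hand side, and direct tridiagonal solver are illustrative choices.

```python
def fem_poisson_1d(n, f=1.0):
    """Piecewise-linear finite elements for -u'' = f on (0, 1) with
    u(0) = u(1) = 0. Element assembly yields the tridiagonal system
    (1/h)*(-u[i-1] + 2*u[i] - u[i+1]) = h*f at interior nodes, which
    is solved directly with the Thomas algorithm."""
    h = 1.0 / n
    m = n - 1                    # number of interior (unknown) nodes
    a = [-1.0 / h] * m           # sub-diagonal of the stiffness matrix
    b = [2.0 / h] * m            # diagonal
    c = [-1.0 / h] * m           # super-diagonal
    d = [h * f] * m              # assembled load vector (hat-function areas)
    for i in range(1, m):        # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    sol = [0.0] * m              # back substitution
    sol[m - 1] = d[m - 1] / b[m - 1]
    for i in range(m - 2, -1, -1):
        sol[i] = (d[i] - c[i] * sol[i + 1]) / b[i]
    return [0.0] + sol + [0.0]   # re-attach the boundary nodes

u = fem_poisson_1d(10)
```

For f = 1 the exact solution is u(x) = x(1 − x)/2, and in this 1D case the linear-element FEM reproduces it exactly at the nodes, illustrating how the assembled global system delivers the numerical answer.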
12.
Riemann solver
–
A Riemann solver is a numerical method used to solve a Riemann problem. Riemann solvers are heavily used in computational fluid dynamics and computational magnetohydrodynamics. Godunov is credited with introducing the first exact Riemann solver for the Euler equations; modern solvers are able to simulate relativistic effects and magnetic fields. For the hydrodynamic case, recent research has shown the possibility of avoiding the iterations needed to calculate the exact solution for the Euler equations. Because iterative solutions are too costly, especially in magnetohydrodynamics, some approximations have to be made. The most popular approximate solvers are the following. Roe used a linearisation of the Jacobian, which he then solves exactly. The HLLE solver is an approximate solution to the Riemann problem which is based only on the integral form of the conservation laws; the stability and robustness of the HLLE solver are closely related to the signal velocities. The description of the HLLE scheme in the book cited below is incomplete and partially wrong; the reader is referred to the original paper. Actually, the HLLE scheme is based on a new stability theory for discontinuities in fluids, which was never published. The HLLC solver was introduced by Toro; it restores the missing rarefaction wave by some estimates, like linearisations. These can be simple, but more advanced variants also exist, such as using the Roe average velocity for the middle wave speed; they are quite robust and efficient but somewhat more diffusive. Rotated hybrid solvers were introduced by Nishikawa and Kitamura in order to overcome the carbuncle problems of the Roe solver and the excessive diffusion of the HLLE solver at the same time. In particular, the one derived from the Roe and HLLE solvers, called the Rotated-RHLL solver, is extremely robust. See also: Godunov's scheme; computational fluid dynamics; computational magnetohydrodynamics. Reference: Toro, Eleuterio F., Riemann Solvers and Numerical Methods for Fluid Dynamics, Berlin: Springer Verlag, ISBN 3-540-65966-8.
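The idea of an approximate solver built only on the integral form of the conservation laws can be sketched as an HLL-type interface flux; the Burgers' flux and min/max wave-speed estimates below are illustrative choices, not the HLLE solver's own speed estimates:

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL-type approximate Riemann flux for a 1D conservation law
    u_t + f(u)_x = 0, given left/right states uL, uR and wave-speed
    estimates sL <= sR; derived from the integral form of the
    conservation law over a control volume spanning the Riemann fan."""
    if sL >= 0.0:
        return f(uL)   # all waves move right: upwind from the left
    if sR <= 0.0:
        return f(uR)   # all waves move left: upwind from the right
    # Subsonic case: average over the star region of the Riemann fan.
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

# Example flux: Burgers' equation, f(u) = u^2/2, with f'(u) = u,
# so min(uL, uR) and max(uL, uR) are simple wave-speed estimates.
burgers = lambda u: 0.5 * u * u
```

The flux is consistent (it reduces to f(u) when both states coincide), which is the minimal requirement for a conservative Godunov-type scheme built on top of it.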
13.
Smoothed-particle hydrodynamics
–
Smoothed-particle hydrodynamics (SPH) is a computational method used for simulating the dynamics of continuum media, such as solid mechanics and fluid flows. It was developed by Gingold, Monaghan and Lucy, initially for astrophysical problems, and has since been used in many fields of research, including astrophysics, ballistics, volcanology, and oceanography. It is a mesh-free Lagrangian method, and the resolution of the method can easily be adjusted with respect to variables such as the density. The smoothed-particle hydrodynamics method works by dividing the fluid into a set of discrete elements, referred to as particles. These particles have a spatial distance, known as the smoothing length, over which their properties are smoothed by a kernel function. This means that a physical quantity at any particle can be obtained by summing the relevant properties of all the particles which lie within the range of the kernel. For example, using Monaghan's popular cubic spline kernel, the temperature at position r depends on the temperatures of all the particles within a radial distance 2h of r. The contributions of each particle to a property are weighted according to their distance from the particle of interest; mathematically, this is governed by the kernel function. Kernel functions commonly used include the Gaussian function and the cubic spline; the latter is exactly zero for particles further away than two smoothing lengths. This has the advantage of saving computational effort by not including the relatively minor contributions from distant particles. Similarly, the spatial derivative of a quantity can be obtained easily by virtue of the linearity of the derivative: ∇A(r_i) = Σ_j m_j (A_j/ρ_j) ∇W(|r_i − r_j|, h). Although the size of the smoothing length can be fixed in both space and time, this does not take advantage of the full power of SPH.
By assigning each particle its own smoothing length and allowing it to vary with time, the resolution can adapt automatically: in a very dense region where many particles are close together, the smoothing length can be made relatively short, yielding high spatial resolution; conversely, in low-density regions where particles are far apart, the smoothing length can be increased. Combined with an equation of state and an integrator, SPH can simulate hydrodynamic flows efficiently; however, the traditional artificial viscosity formulation used in SPH tends to smear out shocks and contact discontinuities to a much greater extent than state-of-the-art grid-based schemes. The Lagrangian-based adaptivity of SPH is analogous to the adaptivity present in grid-based adaptive mesh refinement codes; in some ways it is actually simpler, because SPH particles lack any explicit topology relating them, unlike the elements in FEM. Adaptivity in SPH can be introduced in two ways: either by changing the particle smoothing lengths or by splitting SPH particles into daughter particles with smaller smoothing lengths. The first method is common in astrophysical simulations where the particles naturally evolve into states with large density differences. However, in hydrodynamics simulations where the density is roughly constant, this is not a suitable method for adaptivity, and for this reason particle splitting can be employed, with conditions for splitting ranging from distance to a free surface through to material shear. Often in astrophysics one wishes to model self-gravity in addition to pure hydrodynamics, and the particle-based nature of SPH makes it ideal to combine with a particle-based gravity solver, for instance a tree gravity code, particle mesh, or particle-particle particle-mesh.
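The kernel-weighted summation at the heart of SPH can be sketched in 1D with the cubic spline kernel; the particle positions, masses, and smoothing length below are illustrative choices:

```python
def cubic_spline_kernel(r, h):
    """Cubic spline (M4) smoothing kernel in 1D, normalisation 2/(3h).
    It is exactly zero for r >= 2h, so particles beyond two smoothing
    lengths contribute nothing -- the compact support described above."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(x, positions, masses, h):
    """SPH density estimate rho(x) = sum_j m_j * W(|x - x_j|, h):
    a quantity at a point is a kernel-weighted sum over neighbours."""
    return sum(m * cubic_spline_kernel(abs(x - p), h)
               for p, m in zip(positions, masses))

rho = sph_density(0.5, positions=[0.0, 0.5, 1.0],
                  masses=[1.0, 1.0, 1.0], h=0.5)
```

Any other smoothed quantity (temperature, pressure) follows the same pattern, with the summand weighted by m_j/ρ_j times the particle's value of that quantity.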
14.
Monte Carlo method
–
Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. Their essential idea is using randomness to solve problems that might be deterministic in principle; they are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. In principle, Monte Carlo methods can be applied to any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean of independent samples of the variable. When the probability distribution of the variable is parametrized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler; the central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired distribution; by the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler. In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation; in other instances we are given a flow of probability distributions with an increasing level of sampling complexity. These models can also be seen as the evolution of the law of the states of a nonlinear Markov chain. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methodologies, these mean-field particle techniques rely on sequential interacting samples; the terminology "mean field" reflects the fact that each of the samples interacts with the empirical measures of the process.
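As a concrete illustration of the MCMC idea above, here is a minimal random-walk Metropolis sampler; the Gaussian target, the step size, and the function names are assumptions of this sketch, not part of the article:

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: builds a Markov chain whose stationary
    distribution is proportional to exp(log_target(x))."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # accept with probability min(1, target(proposal) / target(x))
        if math.log(rng.random() + 1e-300) < log_target(proposal) - log_target(x):
            x = proposal
        chain.append(x)
    return chain

# target: a standard normal density, known only up to a constant
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=50000)
```

After discarding an initial burn-in, the empirical mean and variance of the chain approximate those of the target distribution, which is exactly the ergodic-theorem statement in the text.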
Monte Carlo methods vary, but tend to follow a particular pattern: define a domain of possible inputs; generate inputs randomly from a probability distribution over the domain; perform a deterministic computation on the inputs; and aggregate the results. For example, consider a circle inscribed in a unit square. Given that the circle and the square have a ratio of areas of π/4, the value of π can be approximated as follows: uniformly scatter objects of uniform size over the square; count the number of objects inside the circle and the total number of objects; the ratio of the two counts is an estimate of the ratio of the two areas, which is π/4; multiply the result by 4 to estimate π. In this procedure the domain of inputs is the square that circumscribes our circle. We generate random inputs by scattering grains over the square, then perform a computation on each input (test whether it falls within the circle). Finally, we aggregate the results to obtain our final result, the approximation of π. There are two important points to consider here: firstly, if the grains are not uniformly distributed, then our approximation will be poor; secondly, there should be a large number of inputs.
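The grain-scattering procedure above translates directly into code; this sketch samples the unit square and its inscribed quarter circle, which has the same π/4 area ratio as the full construction (the function name and sample count are assumptions):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Scatter points uniformly over the unit square; the fraction landing
    inside the inscribed quarter circle approximates pi / 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:        # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples
```

The estimate improves slowly, with error shrinking only as 1/√N, which is why a large number of inputs matters.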
15.
Monte Carlo integration
–
In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid, Monte Carlo randomly chooses the points at which the integrand is evaluated; this method is particularly useful for higher-dimensional integrals. There are different methods to perform a Monte Carlo integration, such as uniform sampling, stratified sampling, importance sampling, and sequential Monte Carlo. In numerical integration, methods such as the trapezoidal rule use a deterministic approach; Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the final outcome is an approximation of the correct value with respective error bars. This is because the law of large numbers ensures that

lim_{N→∞} Q_N = I.

Given the estimation of I from Q_N, the error bars of Q_N can be estimated by the sample variance, using the unbiased estimate of the variance:

Var(f) ≡ σ_N² = (1/(N − 1)) Σ_{i=1}^{N} (f(x_i) − ⟨f⟩)²,

which leads to

Var(Q_N) = (V²/N²) Σ_{i=1}^{N} Var(f) = V² Var(f)/N = V² σ_N²/N.

As long as the sequence σ_1², σ_2², σ_3², … is bounded, this variance decreases asymptotically to zero as 1/N. The estimation of the error of Q_N is thus

δQ_N ≈ √Var(Q_N) = V σ_N/√N,

which is the standard error of the mean multiplied by V. While the naive Monte Carlo works for simple examples, this is not the case in most problems; a large part of the Monte Carlo literature is dedicated to developing strategies to improve the error estimates. In particular, stratified sampling (dividing the region into sub-domains) and importance sampling (sampling from non-uniform distributions) are two such techniques. A paradigmatic example of a Monte Carlo integration is the estimation of π. Consider the function

H(x, y) = 1 if x² + y² ≤ 1, and 0 otherwise,

and notice that I = ∫_Ω H(x, y) dx dy = π, where Ω = [−1, 1] × [−1, 1]. Keep in mind that a true random number generator should be used.
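The estimator Q_N and its error bar V·σ_N/√N described above can be sketched directly; the test integrand x² on [0, 1] and the function name are illustrative assumptions:

```python
import math
import random

def mc_integrate(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b],
    with a one-sigma error bar from the unbiased sample variance."""
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)   # unbiased sigma_N^2
    volume = b - a                                          # V
    estimate = volume * mean                                # Q_N
    error = volume * math.sqrt(var / n)                     # V * sigma_N / sqrt(N)
    return estimate, error

# integral of x^2 over [0, 1] is exactly 1/3
est, err = mc_integrate(lambda x: x * x, 0.0, 1.0, 100000)
```

Doubling the accuracy requires quadrupling the number of samples, the 1/√N behaviour the text derives.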
On each recursion step the integral and the error are estimated using a plain Monte Carlo algorithm; if the error estimate is larger than the required accuracy, the integration volume is divided into sub-volumes and the procedure is recursively applied to the sub-volumes. The ordinary "dividing by two" strategy does not work in many dimensions, as the number of sub-volumes grows far too quickly to keep track of; instead one estimates along which dimension a subdivision should bring the most dividends and subdivides the volume only along this dimension. The popular MISER routine implements a similar algorithm. The MISER algorithm is based on recursive stratified sampling. This technique aims to reduce the overall integration error by concentrating integration points in the regions of highest variance. The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step
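A much-simplified, one-dimensional sketch of this recursive stratification idea follows; MISER itself works in many dimensions and reuses its pilot samples, so the fixed recursion depth, the pilot fraction, and the discarded pilots here are assumptions of the illustration:

```python
import random
import statistics

def stratified_mc(f, a, b, n, depth=4, seed=0):
    """Recursive stratified sampling in 1D: take pilot samples in each half,
    then give the larger share of the remaining budget to the half whose
    samples show the larger variance."""
    rng = random.Random(seed)

    def estimate(lo, hi, budget, depth):
        if depth == 0 or budget < 8:
            # plain Monte Carlo on this sub-interval
            samples = [f(lo + (hi - lo) * rng.random()) for _ in range(max(budget, 2))]
            return (hi - lo) * statistics.fmean(samples)
        mid = 0.5 * (lo + hi)
        pilot = budget // 4
        left = [f(lo + (mid - lo) * rng.random()) for _ in range(pilot)]
        right = [f(mid + (hi - mid) * rng.random()) for _ in range(pilot)]
        s_left = statistics.pstdev(left) + 1e-12
        s_right = statistics.pstdev(right) + 1e-12
        remaining = budget - 2 * pilot
        n_left = int(remaining * s_left / (s_left + s_right))
        return (estimate(lo, mid, n_left, depth - 1)
                + estimate(mid, hi, remaining - n_left, depth - 1))

    return estimate(a, b, n, depth)
```

Concentrating samples where the variance is largest is exactly the error-reduction mechanism the text attributes to MISER.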
16.
N-body simulation
–
In physics and astronomy, an N-body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces, such as gravity. In physical cosmology, N-body simulations are used to study processes of non-linear structure formation such as galaxy filaments, while direct N-body simulations are used to study the dynamical evolution of star clusters. The particles treated by the simulation may or may not correspond to objects which are particulate in nature. For example, an N-body simulation of a star cluster might have a particle per star; more generally, the quantity of matter each particle represents need not have any physical significance, but must be chosen as a compromise between accuracy and manageable computer requirements. These calculations are used in situations where interactions between individual objects, such as stars or planets, are important to the evolution of the system. The first direct N-body simulations were carried out by Sebastian von Hoerner at the Astronomisches Rechen-Institut in Heidelberg. Regularization is a mathematical trick to remove the singularity in the Newtonian law of gravitation for two particles which approach each other arbitrarily closely. Sverre Aarseth's codes are used to study the dynamics of star clusters and planetary systems. Many simulations are large enough that the effects of general relativity in establishing a Friedmann-Lemaître-Robertson-Walker cosmology are significant. This is incorporated in the simulation as an evolving measure of distance in a comoving coordinate system. The boundary conditions of these simulations are usually periodic, so that one edge of the simulation volume matches up with the opposite edge. N-body simulations are simple in principle, because they merely involve integrating the 6N ordinary differential equations defining the particle motions in Newtonian gravity; in practice, however, direct summation of all pairwise forces scales as O(N²), so a number of refinements are commonly used. There are two basic approximation schemes to decrease the time for such simulations.
These can reduce the complexity to O(N log N) or better, at the loss of some accuracy. Tree methods group distant particles together and treat each group as a single mass; this can dramatically reduce the number of particle pair interactions that must be computed. For simulations where particles are not evenly distributed, the well-separated pair decomposition methods of Callahan and Kosaraju yield optimal O(N log N) time per iteration with fixed dimension. In particle-mesh methods, the density is interpolated onto a grid and the potential is solved in Fourier space; the gravitational field can then be found by differentiating the potential, which in Fourier space amounts to multiplication by the wavevector k. Since this method is limited by the mesh size, in practice a smaller mesh or some other technique is used to compute the small-scale forces. Sometimes an adaptive mesh is used, in which the cells are much smaller in the denser regions of the simulation. Several different gravitational perturbation algorithms are used to get accurate estimates of the path of objects in the Solar System. People often decide to put a satellite in a frozen orbit, and it is possible to find a frozen orbit without calculating the actual path of the satellite. Some characteristics of the paths of a system of particles can be calculated directly
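The O(N²) direct summation that tree and mesh methods accelerate is itself short to write down; the softening length, the leapfrog integrator, and the unit conventions below are standard choices assumed for this sketch, not taken from the article:

```python
import numpy as np

G = 1.0            # gravitational constant in simulation units
SOFTENING = 1e-2   # softening length: tames the r -> 0 singularity

def accelerations(pos, mass):
    """Direct O(N^2) summation of softened pairwise gravitational pulls."""
    diff = pos[None, :, :] - pos[:, None, :]        # displacements r_j - r_i
    dist2 = (diff ** 2).sum(axis=2) + SOFTENING ** 2
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                   # a particle exerts no self-force
    return G * (diff * inv_r3[:, :, None] * mass[None, :, None]).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step of the 6N ODE system."""
    vel_half = vel + 0.5 * dt * accelerations(pos, mass)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new
```

Because the pairwise forces are antisymmetric, total momentum is conserved to machine precision, a useful sanity check on any implementation.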
17.
Molecular dynamics
–
Molecular dynamics (MD) is a computer simulation method for studying the physical movements of atoms and molecules, and is thus a type of N-body simulation. The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamical evolution of the system. The method was developed within the field of theoretical physics in the late 1950s but is applied today mostly in chemical physics and materials science. Following the earlier successes of Monte Carlo simulations and the numerical experiments of Fermi, Pasta, and Ulam, in 1957 Alder and Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres. In 1960, Gibson et al. simulated radiation damage of solid copper by using a Born-Mayer type of repulsive interaction along with a surface force. In 1964, Rahman published landmark simulations of liquid argon that used a Lennard-Jones potential; calculations of system properties, such as the coefficient of self-diffusion, compared well with experimental data. Even before it became possible to simulate molecular dynamics with computers, some undertook the hard work of building physical models; the idea was to arrange them to replicate the properties of a liquid. As J. D. Bernal recalled: "I took a number of balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption." In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as thin film growth; it is also used to examine the physical properties of nanotechnological devices that have not been or cannot yet be created. In principle MD can be used for ab initio prediction of protein structure by simulating folding of the polypeptide chain from a random coil.
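Rahman-style simulations rest on the Lennard-Jones pair potential mentioned above; a minimal sketch in reduced units (ε = σ = 1 and the function names are assumptions of this illustration):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lennard_jones_force(r, epsilon=1.0, sigma=1.0):
    """Radial force -dU/dr; positive means repulsive, negative attractive."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r
```

The potential has its minimum of depth ε at r = 2^(1/6)·σ, where the force changes sign from repulsive to attractive; an MD integrator feeds these pairwise forces into Newton's equations of motion.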
The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics. Michael Levitt, who shared the Nobel Prize awarded in part for the application of MD to proteins, wrote in 1999 that CASP participants usually did not use the method due to "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure." Limits of the method are related to the parameter sets used and to the underlying molecular mechanics force fields. The neglected contributions include the conformational entropy of the polypeptide chain. Another important factor is intramolecular hydrogen bonds, which are not explicitly included in modern force fields but described as Coulomb interactions of atomic point charges. This is a crude approximation because hydrogen bonds have a partially quantum mechanical and chemical nature. Furthermore, electrostatic interactions are calculated using the dielectric constant of vacuum; using the macroscopic dielectric constant at short interatomic distances is questionable. Finally, van der Waals interactions in MD are usually described by Lennard-Jones potentials based on the Fritz London theory that is applicable in vacuum
18.
Sergei K. Godunov
–
Sergei Konstantinovich Godunov is a professor at the Sobolev Institute of Mathematics of the Russian Academy of Sciences in Novosibirsk, Russia. Professor Godunov's most influential work is in the area of applied mathematics, and it has had a major impact on science and engineering, particularly in the development of methodologies used in computational fluid dynamics and other computational fields. On 1–2 May 1997 a symposium entitled "Godunov-type numerical methods" was held at the University of Michigan to honour Godunov; these methods are widely used to compute continuum processes dominated by wave propagation. On the following day, 3 May, Godunov received an honorary degree from the University of Michigan. Godunov's theorem states that linear numerical schemes for solving partial differential equations, having the property of not generating new extrema (monotone schemes), can be at most first-order accurate. Godunov's scheme is a numerical scheme for solving partial differential equations.

1946–1951 – Department of Mechanics and Mathematics, Moscow State University
1951 – Diploma, Moscow State University
1954 – Candidate of Physical and Mathematical Sciences
1965 – Doctor of Physical and Mathematical Sciences
1976 – Corresponding member of the USSR Academy of Sciences
1994 – Member of the Russian Academy of Sciences
1997 – Honorary professor of the University of Michigan
Krylov Prize of the USSR Academy of Sciences
1993 – M. A. Lavrentiev Prize of the Russian Academy of Sciences

See also: total variation diminishing, upwind scheme. Selected works: Godunov, Sergei K., Ph.D. dissertation: Difference Methods for Shock Waves, Moscow State University. Godunov, Sergei K., "A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations", Mat. Sbornik, 47, 271–306 (translated by the US Joint Publ. Service, JPRS 7225, Nov. 29, 1960). Godunov, Sergei K. and Romenskii, Evgenii I., Elements of Continuum Mechanics and Conservation Laws, Springer, ISBN 0-306-47735-1. Numerical Computation of Internal and External Flows, vol. 2, Wiley.
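For the linear advection equation u_t + a·u_x = 0, Godunov's scheme reduces to first-order upwind differencing, which makes its first-order accuracy (the content of Godunov's theorem) easy to see in code; the periodic boundaries and the assumption a > 0 are choices of this sketch:

```python
import numpy as np

def upwind_advection(u0, a, dx, dt, n_steps):
    """First-order upwind scheme for u_t + a * u_x = 0 with a > 0 and
    periodic boundaries; Godunov's scheme reduces to this in the linear case."""
    c = a * dt / dx                       # CFL number: stability needs 0 < c <= 1
    assert 0.0 < c <= 1.0, "CFL condition violated"
    u = u0.astype(float).copy()
    for _ in range(n_steps):
        u = u - c * (u - np.roll(u, 1))   # upwind difference with periodic wrap
    return u
```

The scheme generates no new extrema (it is monotone), at the cost of smearing sharp profiles unless the CFL number is exactly one.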
19.
Stanislaw Ulam
–
Stanisław Marcin Ulam was a Polish-American mathematician. In pure and applied mathematics, he proved some theorems and proposed several conjectures. Born into a wealthy Polish Jewish family, Ulam studied mathematics at the Lwów Polytechnic Institute, where he earned his PhD in 1933 under the supervision of Kazimierz Kuratowski. In 1935, John von Neumann, whom Ulam had met in Warsaw, invited him to come to the Institute for Advanced Study in Princeton, New Jersey, for a few months. From 1936 to 1939, he spent summers in Poland and academic years at Harvard University in Cambridge, Massachusetts. On 20 August 1939, he sailed for America for the last time with his 17-year-old brother Adam Ulam. He became an assistant professor at the University of Wisconsin–Madison in 1940, and in October 1943 he received an invitation from Hans Bethe to join the Manhattan Project at the secret Los Alamos Laboratory in New Mexico. There, he worked on the calculations to predict the behavior of the explosive lenses that were needed by an implosion-type weapon. He was assigned to Edward Teller's group, where he worked on Teller's "Super" bomb. After the war he left to become an associate professor at the University of Southern California, but returned to Los Alamos in 1946 to work on thermonuclear weapons, with the aid of a cadre of female "computers", including his wife Françoise Aron Ulam. In January 1951, Ulam and Teller came up with the Teller–Ulam design, which is the basis for all thermonuclear weapons. With Fermi and John Pasta, Ulam studied the Fermi–Pasta–Ulam problem. Ulam was born in Lemberg, Galicia, on 13 April 1909. At this time, Galicia was in the Kingdom of Galicia and Lodomeria of the Austro-Hungarian Empire; in 1918, it became part of the newly restored Poland, the Second Polish Republic, and the city took its Polish name again, Lwów. The Ulams were a wealthy Polish Jewish family of bankers and industrialists; Ulam's immediate family was well-to-do but hardly rich.
His father, Józef Ulam, was born in Lwów and was a lawyer, and his uncle, Michał Ulam, was an architect, building contractor, and lumber industrialist. From 1916 until 1918, Józef's family lived temporarily in Vienna; after they returned, Lwów became the epicenter of the Polish–Ukrainian War, during which the city experienced a Ukrainian siege. In 1919, Ulam entered Lwów Gymnasium Nr. VII, from which he graduated in 1927. He then studied mathematics at the Lwów Polytechnic Institute; under the supervision of Kazimierz Kuratowski, he received his Master of Arts degree in 1932 and became a Doctor of Science in 1933. At the age of 20, in 1929, he published his first paper, Concerning Function of Sets, in the journal Fundamenta Mathematicae. From 1931 until 1935, he traveled to and studied in Wilno, Vienna, Zurich, Paris, and Cambridge, England. Along with Stanisław Mazur, Mark Kac, Włodzimierz Stożek, Kuratowski, and others, Ulam was a member of the Lwów School of Mathematics. Its founders were Hugo Steinhaus and Stefan Banach, who were professors at the University of Lwów. Mathematicians of this school met for long hours at the Scottish Café, where the problems they discussed were collected in the Scottish Book, a thick notebook provided by Banach's wife.
20.
John von Neumann
–
John von Neumann was a Hungarian-American mathematician, physicist, inventor, computer scientist, and polymath. He made major contributions to a number of fields, including mathematics, physics, economics, computing, and statistics. He published over 150 papers in his life: about 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. His last work, an unfinished manuscript written while in the hospital, was later published in book form as The Computer and the Brain. His analysis of the structure of self-replication preceded the discovery of the structure of DNA. In a short list of facts about his life he submitted to the National Academy of Sciences, he wrote: "also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan Project, developing the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission; along with theoretical physicist Edward Teller, mathematician Stanislaw Ulam, and others, he worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann was born Neumann János Lajos to a wealthy, acculturated Jewish family. Von Neumann's place of birth was Budapest in the Kingdom of Hungary, which was then part of the Austro-Hungarian Empire. He was the eldest of three children; he had two younger brothers, Michael, born in 1907, and Nicholas, who was born in 1911. His father, Neumann Miksa, was a banker who held a doctorate in law; he had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were both born in Ond, Zemplén County, northern Hungary. John's mother was Kann Margit; her parents were Jakab Kann and Katalin Meisels. Three generations of the Kann family lived in apartments above the Kann-Heller offices in Budapest.
In 1913, his father was elevated to the nobility for his service to the Austro-Hungarian Empire by Emperor Franz Joseph; the Neumann family thus acquired the hereditary appellation Margittai, meaning "of Marghita". The family had no connection with the town; the appellation was chosen in reference to Margaret. Neumann János became Margittai Neumann János, which he later changed to the German Johann von Neumann. Von Neumann was a child prodigy: as a 6-year-old, he could multiply and divide two 8-digit numbers in his head and could converse in Ancient Greek. When he once caught his mother staring aimlessly, the 6-year-old von Neumann asked her what she was calculating. Formal schooling did not start in Hungary until the age of ten; instead, governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages other than Hungarian was essential, so the children were tutored in English, French, German and Italian. A copy was contained in a private library Max purchased; one of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor. Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1911. This was one of the best schools in Budapest, part of a brilliant education system designed for the elite.
21.
Boris Galerkin
–
Boris Grigoryevich Galerkin, born in Polotsk, Vitebsk Governorate, Russian Empire, was a Jewish Soviet mathematician and engineer. Galerkin was born on March 4, 1871 in Polotsk, Vitebsk Governorate, Russian Empire, now part of Belarus, to Girsh-Shleym Galerkin and Perla Basia Galerkina. His parents owned a house in the town, but the homecraft they made did not bring in much money, so at the age of 12 he had to start earning money. He had finished school in Polotsk, but still needed to pass the examinations for an additional year of schooling, which granted him the right to continue his education at a higher level. He passed those in Minsk in 1893, as an external student, and the same year he was enrolled at the St. Petersburg Technological Institute, in the mechanics department. Due to a lack of funds, Boris Grigoryevich had to combine studying at the institute with working as a draftsman. At some point in his life, he married Revekka Treivas, a second niece. They did not have any children. Like many other students and technologists, he was involved in political activities and joined the social-democratic group. In 1899, the year he graduated from the institute, he became a member of the Russian Social-Democratic Party, and this provides a plausible explanation for his frequent job changes. From the end of 1903 he was an engineer on the construction of the China Far East Railway; half a year later he became the head at the North mechanical. He participated in organizing the Union of Engineers in St. Petersburg and, in 1906, Boris Grigoryevich became a member of the Social-Democratic Party's St. Petersburg Committee and did not work anywhere else. In the prison known as Kresty, Boris Grigoryevich lost interest in revolutionary activities and devoted himself to science; prison conditions of that time allowed such opportunities. What is more, his work-book records that Boris Grigoryevich worked as an engineer designing and constructing the power plant from 1907.
This fact was never explained, and Boris Grigoryevich did not like to tell others about his revolutionary youth. Later, in Soviet questionnaires, he would not give clear answers to the persistent questions about membership in different parties. Of course, he was familiar with the fate of old Party members; Galerkin's life could have been the price if this fact had become known to the public. The same year, his first scientific work was published in the institute's Transactions; the article was titled A theory of longitudinal curving and an experience of longitudinal curving theory application to many-storied frames, frames with rigid junctions and frame systems. The length of the title was indicative of the length of the work itself, 130 pages, and it was written in the Kresty prison. In the summer of 1909 Boris Grigoryevich took a trip abroad to see constructions; during the next four years, i.e. before World War I, he and many other institute staff visited Europe to stimulate their scientific interests.
22.
Edward Norton Lorenz
–
Edward Norton Lorenz was an American mathematician, meteorologist, and a pioneer of chaos theory, along with Mary Cartwright. He introduced the strange attractor notion and coined the term "butterfly effect". Lorenz was born in West Hartford, Connecticut. He studied mathematics at both Dartmouth College in New Hampshire and Harvard University in Cambridge, Massachusetts. From 1942 until 1946, he served as a meteorologist for the United States Army Air Corps. After his return from World War II, he decided to study meteorology; Lorenz earned two degrees in the area from the Massachusetts Institute of Technology, where he later was a professor for many years. He was a Professor Emeritus at MIT from 1987 until his death. During the 1950s, Lorenz became skeptical of the appropriateness of the linear statistical models in meteorology, as most atmospheric phenomena involved in weather forecasting are non-linear. His work on the topic culminated in the publication of his 1963 paper Deterministic Nonperiodic Flow in the Journal of the Atmospheric Sciences; his description of the butterfly effect followed in 1969. He was awarded the Kyoto Prize for basic sciences, in the field of earth and planetary sciences, in 1991, and the Buys Ballot Award in 2004. In his later years, Lorenz lived in Cambridge, Massachusetts. He was an outdoorsman who enjoyed hiking and climbing, and he kept up with these pursuits until very late in his life; according to his daughter, Cheryl Lorenz, Lorenz had "finished a paper a week ago with a colleague." On April 16, 2008, Lorenz died at his home in Cambridge at the age of 90. His honors include:

1969 – Carl-Gustaf Rossby Research Medal, American Meteorological Society
1973 – Symons Gold Medal, Royal Meteorological Society
1975 – Fellow, National Academy of Sciences
1981 – Member, Norwegian Academy of Science and Letters
1983 – Crafoord Prize, Royal Swedish Academy of Sciences
1984 – Honorary Member, Royal Meteorological Society
2004 – Lomonosov Gold Medal

Lorenz built a mathematical model of the way air moves around in the atmosphere. As Lorenz studied weather patterns, he began to realize that they did not always behave as predicted: minute variations in the initial values of variables in his twelve-variable computer weather model would result in grossly divergent weather patterns. This sensitive dependence on initial conditions came to be known as the butterfly effect. Lorenz published several books and articles; a selection:

1955 – Available potential energy and the maintenance of the general circulation
1967 – The nature and theory of the general circulation of the atmosphere, World Meteorological Organization, No. 218
1969 – Three approaches to atmospheric predictability, Bulletin of the American Meteorological Society
23.
Kenneth G. Wilson
–
Kenneth Geddes Wilson was an American theoretical physicist and a pioneer in leveraging computers for studying particle physics. He was awarded the 1982 Nobel Prize in Physics for his work on phase transitions, illuminating the subtle essence of phenomena like melting ice; it was embodied in his fundamental work on the renormalization group. His mother also trained as a physicist. He attended several schools, including Magdalen College School, Oxford, England, ending up at the George School in eastern Pennsylvania. He went on to Harvard College at age 16, majoring in Mathematics, and he was also a star on the athletics track, representing Harvard in the mile. During his summer holidays he worked at the Woods Hole Oceanographic Institution. He earned his PhD from Caltech in 1961, studying under Murray Gell-Mann, and did post-doc work at Harvard and CERN. He joined Cornell University in 1963 in the Department of Physics as a junior faculty member, becoming a full professor in 1970; he also did research at SLAC during this period. In 1974, he became the James A. Weeks Professor of Physics at Cornell, and in 1982 he was awarded the Nobel Prize in Physics for his work on critical phenomena using the renormalization group. He was a co-winner of the Wolf Prize in physics in 1980, together with Michael E. Fisher. His other awards include the A. C. Eringen Medal, the Franklin Medal, the Boltzmann Medal, and the Dannie Heineman Prize. He was elected to the National Academy of Sciences and the American Academy of Arts and Sciences. In 1988, Wilson joined the faculty at The Ohio State University and moved to Gray, Maine in 1995. He continued his association with Ohio State University until he retired in 2008. Prior to his death, he was actively involved in research on physics education and was an early proponent of active involvement of K-12 students in science and math. Some of his PhD students include H. R.
Krishnamurthy, Roman Jackiw, Michael Peskin, Serge Rudaz, Paul Ginsparg, and Steven R. White. Wilson's brother David is also a professor at Cornell, in the department of Molecular Biology and Genetics. Wilson died at the age of 77 in Saco, Maine on June 15, 2013, and was respectfully remembered by his colleagues. Wilson's work in physics involved the formulation of a comprehensive theory of scaling: how fundamental properties and forces of a system vary depending on the scale over which they are measured. This provided profound insights into the field of critical phenomena and phase transitions in statistical physics, enabling exact calculations. One example of an important problem in solid-state physics he solved using renormalization is the quantitative description of the Kondo effect. In lattice gauge theory, which he formulated, he further shed light on chiral symmetry.
24.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, its main goal is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, these early observations laid the foundations for later astronomy. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. In the medieval Islamic world, the most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera; many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices as Ibn al-Haytham had built, and from this such important things as eyeglasses, magnifying glasses, and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy
25.
Experimental physics
–
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as the Cavendish experiment, to more complicated ones. It is often put in contrast with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with the acquisition of knowledge about it. Although experimental and theoretical physics are concerned with different aspects of nature, theoretical physics can also offer insight on what data is needed in order to gain a better understanding of the universe, and on what experiments to devise in order to obtain it. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discusses the motion of a ship and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum. Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton, describing the laws of motion and the law of universal gravitation; both theories agreed well with experiment. The Principia also included several theories in fluid dynamics. From the late 17th century onward, thermodynamics was developed by physicists and chemists such as Boyle and Young. In 1733, Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Thompson demonstrated the conversion of mechanical work into heat. Ludwig Boltzmann, in the 19th century, is responsible for the modern form of statistical mechanics.
Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Robert Boyle and Stephen Gray established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom. It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that changing magnetic fields could induce electric currents, and in 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries.
26.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing and automated reasoning tasks; an algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic: some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391; English adopted the French term, but it wasn't until the late 19th century that algorithm took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian numerals (Talibus Indorum), or Hindu numerals. An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being starting from a small set of axioms. In logic, the time that an algorithm requires to complete cannot be measured; from such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute; Gurevich, for example, argues that Turing's informal argument in favor of his thesis justifies a stronger thesis. According to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case.
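The definition above, a finite, well-defined procedure that is guaranteed to stop eventually, can be illustrated with a classic example. The following is a minimal Python sketch of Euclid's algorithm for the greatest common divisor (the function name and test values are illustrative, not from the text):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, well-defined sequence of operations.

    Each iteration strictly decreases b (since a % b < b), so the loop
    always halts -- the procedure therefore qualifies as an algorithm
    in the sense described above.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # prints 21
```

The termination argument in the docstring is exactly what separates an algorithm from an arbitrary program: the state transition is deterministic and provably reaches a halting state.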
27.
Approximation error
–
The approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because the measurement of the data is not precise due to the instruments, or because approximations are used instead of the real data. In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how the error is propagated by the algorithm. One commonly distinguishes between the relative error and the absolute error. Given some value v and its approximation v_approx, the absolute error is ϵ = |v − v_approx|. In words, the absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value, and the percent error is the relative error expressed in terms of per 100. These definitions can be extended to the case when v and v_approx are n-dimensional vectors, by replacing the absolute value with an n-norm. As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.1/50 = 0.002. Another example would be if, in measuring a 6 mL beaker, the value read was 5 mL; the correct reading being 6 mL, the percent error in that particular situation is, rounded, 16.7%. Relative error is useful for comparing approximations of numbers of widely differing size: approximating 1,000 with an absolute error of 3 is, in most applications, much worse than approximating 1,000,000 with an absolute error of 3, since in the first case the relative error is 0.003 and in the second it is only 0.000003. There are two features of relative error that should be kept in mind. Firstly, relative error is undefined when the true value is zero, as zero then appears in the denominator. Secondly, relative error only makes sense when measured on a ratio scale; otherwise it would be sensitive to the measurement units. For example, when an error in a temperature measurement given in the Celsius scale is 1 °C and the true value is 2 °C, the relative error is 0.5; Celsius temperature is measured on an interval scale, whereas the Kelvin scale has a true zero. In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading. 
The limits of these deviations from the specified values are known as limiting errors or guarantee errors.
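The definitions above translate directly into code. A minimal Python sketch (the helper names are ours, chosen for illustration):

```python
def absolute_error(v: float, v_approx: float) -> float:
    """Magnitude of the difference between the exact value and the approximation."""
    return abs(v - v_approx)

def relative_error(v: float, v_approx: float) -> float:
    """Absolute error divided by the magnitude of the exact value.

    Undefined when the true value is zero, since zero would appear
    in the denominator.
    """
    if v == 0:
        raise ValueError("relative error is undefined for a true value of zero")
    return absolute_error(v, v_approx) / abs(v)

def percent_error(v: float, v_approx: float) -> float:
    """Relative error expressed per 100."""
    return 100.0 * relative_error(v, v_approx)

# The beaker example from the text: true value 6 mL, reading 5 mL.
print(round(percent_error(6.0, 5.0), 1))  # 16.7

# Same absolute error, very different relative error at different magnitudes.
print(relative_error(1_000, 997), relative_error(1_000_000, 999_997))
```

Note how the second print statement reproduces the comparison in the text: an absolute error of 3 gives a relative error of 0.003 against 1,000 but only 0.000003 against 1,000,000.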
28.
Wave function
–
A wave function in quantum physics is a description of the quantum state of a system. The wave function is a probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ or Ψ. The wave function is a function of the degrees of freedom corresponding to some set of commuting observables. Once such a representation is chosen, the wave function can be derived from the quantum state. For a given system, the choice of which commuting degrees of freedom to use is not unique. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin, and these values are often displayed in a column matrix. According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name wave function and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, still open to different interpretations, which fundamentally differs from that of classical mechanical waves. The squared modulus of the wave function is interpreted as a probability density, and the integral of this quantity over all the degrees of freedom must equal 1. This general requirement a wave function must satisfy is called the normalization condition. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured.
In 1905 Einstein postulated the proportionality between the frequency of a photon and its energy, E = hf, and in 1916 the corresponding relation between a photon's momentum and wavelength, λ = h/p; these equations represent wave–particle duality for both massless and massive particles. In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger and others, developing wave mechanics; those who applied the methods of linear algebra included Werner Heisenberg, Max Born and others, developing matrix mechanics. Schrödinger subsequently showed that the two approaches were equivalent. However, no one was clear on how to interpret the wave function. At first, Schrödinger and others thought that wave functions represent particles that are spread out, with most of the particle being where the wave function is large. This was shown to be incompatible with the scattering of a wave packet representing a particle off a target: while a scattered particle may scatter in any direction, it does not break up.
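The normalization condition described above can be checked numerically. Below is a minimal Python sketch, with an illustrative one-dimensional Gaussian wave packet and grid of our own choosing, that rescales an unnormalized wave function so that the total probability equals 1:

```python
import math

# Illustrative 1-D example (grid, range, and packet width are assumptions):
# discretize an unnormalized Gaussian wave function on x in [-10, 10].
dx = 0.01                                    # grid spacing
xs = [i * dx for i in range(-1000, 1001)]
psi = [math.exp(-x * x) for x in xs]         # unnormalized amplitude

# Normalize: divide by the square root of the discretized integral of |psi|^2.
norm = math.sqrt(sum(abs(p) ** 2 for p in psi) * dx)
psi = [p / norm for p in psi]

# The normalization condition now holds (up to floating-point rounding):
total_probability = sum(abs(p) ** 2 for p in psi) * dx
print(round(total_probability, 9))  # 1.0
```

The same rescaling works for complex-valued amplitudes, since only |ψ|² enters the integral; this also reflects the remark in the text that an overall phase of the wave function is unobservable.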
29.
Electric field
–
An electric field is a vector field that associates to each point in space the Coulomb force that would be experienced per unit of electric charge by an infinitesimal test charge at that point. Electric fields are created by electric charges and can be induced by time-varying magnetic fields; the electric field combines with the magnetic field to form the electromagnetic field. The electric field, E, at a point is defined through the force, F: a particle of charge q would be subject to a force F = qE. Its SI units are newtons per coulomb or, equivalently, volts per metre, which in terms of SI base units are kg⋅m⋅s−3⋅A−1. Electric fields are caused by electric charges or varying magnetic fields. In the special case of a steady state, the Maxwell–Faraday inductive effect disappears. The resulting two equations, taken together, are equivalent to Coulomb's law, written as E(r) = (1/4πε0) ∫ ρ(r′) (r − r′)/|r − r′|³ d³r′ for a charge density ρ. Notice that ε0, the permittivity of vacuum, must be substituted with the permittivity of the medium if charges are considered in non-empty media. The equations of electromagnetism are best described in a continuous description: a point charge q located at r0 can be described mathematically as a charge density ρ(r) = qδ(r − r0); conversely, a charge distribution can be approximated by many small point charges. Electric fields satisfy the superposition principle, because Maxwell's equations are linear. This principle is useful to calculate the field created by multiple point charges: if charges q1, …, qn are stationary in space at r1, r2, …, rn, in that case Coulomb's law fully describes the field. If a system is static, such that magnetic fields are not time-varying, then by Faraday's law the electric field is curl-free. In this case, one can define an electric potential, that is, a function Φ such that E = −∇Φ. This is analogous to the gravitational potential: Coulomb's law, which describes the interaction of electric charges, F = qE, is similar to Newton's law of universal gravitation, F = mg.
This suggests similarities between the electric field E and the gravitational field g, or their associated potentials; mass is sometimes called gravitational charge because of that similarity. Electrostatic and gravitational forces are both central and conservative and obey an inverse-square law. A uniform field is one in which the electric field is constant at every point. It can be approximated by placing two conducting plates parallel to each other and maintaining a voltage between them; it is only an approximation because of boundary effects. Assuming infinite planes, the magnitude of the electric field is E = V/d, where V is the voltage between the plates and d is the distance separating them. Electrodynamic fields are E-fields which do change with time, for instance when charges are in motion. The electric field cannot be described independently of the magnetic field in that case.
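The superposition of Coulomb fields from stationary point charges, as described above, can be sketched as a short Python function (the function name and test values are illustrative assumptions; positions in metres, charges in coulombs, field in V/m):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def e_field(charges, point):
    """Electric field at `point` by superposition of Coulomb's law:

        E(r) = sum_i  q_i (r - r_i) / (4 * pi * eps0 * |r - r_i|^3)

    `charges` is a list of (q, (x, y, z)) pairs; returns (Ex, Ey, Ez).
    """
    ex = ey = ez = 0.0
    for q, (x, y, z) in charges:
        dx, dy, dz = point[0] - x, point[1] - y, point[2] - z
        r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        k = q / (4 * math.pi * EPS0 * r3)
        ex += k * dx
        ey += k * dy
        ez += k * dz
    return ex, ey, ez

# A 1 nC point charge at the origin, field evaluated 1 m away along x:
ex, ey, ez = e_field([(1e-9, (0.0, 0.0, 0.0))], (1.0, 0.0, 0.0))
print(round(ex, 2))  # 8.99  (i.e. q / (4*pi*eps0) ~ 8.99 V/m at 1 m)
```

Because the field is linear in the sources, adding a second, symmetrically placed equal charge makes the two contributions cancel at the midpoint, which is a quick sanity check on the superposition principle.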
30.
Stark effect
–
The Stark effect is the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external electric field. The amount of splitting or shifting is called the Stark splitting or Stark shift. In general, one distinguishes first- and second-order Stark effects: the first-order effect is linear in the electric field, while the second-order effect is quadratic in the field. The Stark effect is responsible for the broadening of spectral lines by charged particles. When the split/shifted lines appear in absorption, the effect is called the inverse Stark effect. The Stark effect is the electric analogue of the Zeeman effect, where a spectral line is split into several components due to the presence of a magnetic field. The Stark effect can be explained with fully quantum-mechanical approaches. The effect is named after Johannes Stark, who discovered it in 1913; it was independently discovered in the same year by the Italian physicist Antonino Lo Surdo. The discovery of this effect contributed importantly to the development of quantum theory. An earlier classical estimate of the Stark splittings, obtained by using experimental indices of refraction, was a few orders of magnitude too low. Not deterred by this prediction, Stark undertook measurements on excited states of the hydrogen atom and succeeded in observing splittings. By the use of the Bohr–Sommerfeld quantum theory, Paul Epstein and Karl Schwarzschild were independently able to derive equations for the linear Stark effect in hydrogen. Four years later, Hendrik Kramers derived formulas for intensities of spectral transitions; Kramers also included the effect of fine structure, which includes corrections for relativistic kinetic energy. The first quantum-mechanical treatment was by Wolfgang Pauli. Erwin Schrödinger discussed the Stark effect at length in his third paper on quantum theory, once in the manner of the 1916 work of Epstein and once by his perturbation approach.
Finally, Epstein reconsidered the linear and quadratic Stark effect from the point of view of the new quantum theory, and he derived equations for the line intensities which were a decided improvement over Kramers's results obtained by the old quantum theory. While first-order perturbation results for the Stark effect in hydrogen agree with the Bohr–Sommerfeld model, measurements of the Stark effect under high field strengths confirmed the correctness of the quantum theory over the Bohr model. An electric field pointing from left to right, for example, tends to pull nuclei to the right and electrons to the left. Other things being equal, the effect of the field is greater for outer electron shells, because the electron is more distant from the nucleus, so it travels farther left. The Stark effect can lead to splitting of degenerate energy levels. For example, in the Bohr model, an electron has the same energy whether it is in the 2s state or any of the 2p states.
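In the language of textbook perturbation theory, the first- and second-order effects mentioned above correspond to the standard first- and second-order energy corrections. A sketch, assuming a uniform field of strength F along z acting on an electron of charge −e, so that the perturbation is H′ = eFz:

```latex
% Perturbation from a uniform field F along z:
H' = e F z
% First-order (linear) Stark shift of level n:
E_n^{(1)} = \langle n | H' | n \rangle = e F \, \langle n | z | n \rangle
% Second-order (quadratic) Stark shift:
E_n^{(2)} = \sum_{m \neq n} \frac{|\langle m | H' | n \rangle|^2}{E_n^{(0)} - E_m^{(0)}}
```

For a non-degenerate state of definite parity, ⟨n|z|n⟩ vanishes, so the shift is quadratic; in hydrogen, the degenerate levels of the same n (such as the 2s and 2p states mentioned above) mix under the field, which is what produces the linear effect.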