1.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. In physics, different theories based on mathematical models provide very precise predictions of how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression or is too complicated. In such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method: while computers can be used in experiments for the measurement and recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly, for several reasons: lack of algebraic and/or analytic solubility, complexity, and chaos. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and complexity of many-body problems tend to grow quickly. A macroscopic system typically has on the order of 10^23 constituent particles, so simulating it directly is a problem; for a classical N-body problem, the cost of a direct calculation grows as N^2. Because computational physics covers a broad class of problems, it is generally divided amongst the mathematical problems it numerically solves and the methods it applies.
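The quadratic growth of a direct many-body calculation can be seen by counting particle pairs; a minimal illustrative sketch (the function name is ours, not from any library):

```python
# Counting the pairwise interactions of a direct N-body calculation.
# A toy illustration of the O(N^2) growth discussed above.
def pair_interactions(n):
    """Number of distinct particle pairs: n*(n-1)/2, i.e. O(n^2)."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pair_interactions(n))
```

Increasing the particle count tenfold increases the pair count roughly a hundredfold, which is why direct methods become intractable for macroscopic systems.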
2.
Numerical analysis
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle is extremely important in, for instance, construction. Numerical analysis continues this long tradition of practical mathematical calculation. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead. These same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving partial differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs; historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis.
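The Babylonian square-root approximation mentioned above can be reproduced with Heron's iteration, a minimal sketch:

```python
# Heron's (Babylonian) method for sqrt(a): repeatedly average x and a/x.
# A minimal sketch of the kind of approximate calculation described above.
def babylonian_sqrt(a, x0=1.0, iterations=6):
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(2.0))  # close to 1.41421356...
```

The iteration converges quadratically, so a handful of steps already matches the accuracy of the four-sexagesimal-digit value on tablet YBC 7289.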
Numerical analysis
–
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures: 1 + 24/60 + 51/60^2 + 10/60^3 = 1.41421296...
3.
Computer simulation
–
Computer simulations reproduce the behavior of a system using a model. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions. The scale of events being simulated by computer simulations has far exceeded anything possible using traditional mathematical modeling. Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification. A model consists of the algorithms and equations used to capture the behavior of the system being modeled. By contrast, simulation is the actual running of the program that contains these algorithms and equations; simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model", then either "run the model" or equivalently "run a simulation". One of the first large-scale deployments of the technique was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. The external data requirements of simulations and models vary widely: for some, the input might be just a few numbers, while others might require terabytes of information. Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula.
Computer simulation
–
Computer simulation of the process of osmosis
Computer simulation
–
A 48-hour computer simulation of Typhoon Mawar using the Weather Research and Forecasting model
4.
Scientific visualization
–
Scientific visualization is an interdisciplinary branch of science, also considered a subset of computer graphics, a branch of computer science. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand, and glean insight from, their data. One of the earliest examples of scientific visualization was Maxwell's thermodynamic surface, sculpted in clay in 1874 by James Clerk Maxwell; it prefigured scientific visualization techniques that use computer graphics. Scientific visualization using computer graphics gained in importance as graphics hardware and software matured; its primary applications were scalar and vector fields from computer simulations, as well as measured data. The primary methods for visualizing two-dimensional scalar fields include drawing contour lines; for 2D vector fields, the primary methods include line integral convolution. For 3D scalar fields the primary methods are isosurfaces. Methods for visualizing vector fields include glyphs such as arrows, streamlines and streaklines, particle tracing, and topological methods. Later, visualization techniques such as hyperstreamlines were developed to visualize 3D tensor fields. Computer animation is the art and science of creating moving images via the use of computers; sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery), especially when used in films.
Scientific visualization
–
A scientific visualization of a simulation of a Rayleigh–Taylor instability caused by two mixing fluids.
Scientific visualization
–
Surface rendering of Arabidopsis thaliana pollen grains with confocal microscope.
Scientific visualization
–
Scientific visualization of Fluid Flow: Surface waves in water
Scientific visualization
–
Chemical imaging of a simultaneous release of SF6 and NH3.
5.
Morse/Long-range potential
–
Owing to the simplicity of the Morse potential, it is not used in modern spectroscopy. The MLR (Morse/Long-range) potential is a modern version of the Morse potential which has the correct long-range form of the potential naturally built into it. The accuracy of its predictions was much better than the most sophisticated ab initio techniques of the time; it has been said that this work by Le Roy et al. was a "landmark in diatomic analysis". The MLR potential is based on the classic Morse potential, first introduced by Philip M. Morse. A primitive version of the MLR potential was first introduced by Robert J. Le Roy and colleagues for a study on N2. This primitive form was used on Ca2, KLi and MgH before the more modern version was introduced by Le Roy, Dattani and Coxon. Since lim_(r→∞) y_p(r) = 1, it follows that lim_(r→∞) β(r) = β∞. More sophisticated versions are used for polyatomic molecules. Examples of molecules for which the MLR has been used to represent ab initio points are KLi and KBe. See also: Dilithium, Morse potential, Lennard-Jones potential.
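The MLR functional form described above can be sketched as follows. All parameter values (De, re, p, q, the βi coefficients and a single C6 dispersion term) are illustrative placeholders, not fitted constants for any real molecule:

```python
import math

# Hedged sketch of the MLR functional form
# V(r) = De * (1 - (u(r)/u(re)) * exp(-beta(r) * y_p(r)))**2,
# where u(r) is the long-range tail and beta(r) interpolates between a
# short-range polynomial and beta_inf = ln(2*De/u(re)).
DE, RE, P, Q = 1.0, 3.0, 5, 3        # illustrative placeholders
C6 = 1.0e5
BETAS = (0.1, -0.05)

def u_lr(r):
    """Long-range tail, here a single dispersion term C6/r**6."""
    return C6 / r ** 6

def y(r, p):
    """Radial variable y_p(r) = (r**p - re**p) / (r**p + re**p)."""
    return (r ** p - RE ** p) / (r ** p + RE ** p)

def v_mlr(r):
    beta_inf = math.log(2.0 * DE / u_lr(RE))
    yp, yq = y(r, P), y(r, Q)
    beta = beta_inf * yp + (1.0 - yp) * sum(b * yq ** i for i, b in enumerate(BETAS))
    return DE * (1.0 - (u_lr(r) / u_lr(RE)) * math.exp(-beta * yp)) ** 2

print(v_mlr(RE))  # 0.0: the potential is zero at the equilibrium distance
```

By construction the curve rises from zero at re toward the dissociation limit De, approaching it with the built-in long-range tail.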
6.
Lennard-Jones potential
–
The Lennard-Jones potential is a mathematically simple model that approximates the interaction between a pair of neutral atoms or molecules. A form of this interatomic potential was first proposed in 1924 by John Lennard-Jones. At rm, the potential function attains its minimum, of depth ε. The distances are related by rm = 2^(1/6)σ ≈ 1.122σ. These parameters can be fitted to reproduce experimental data or accurate quantum chemistry calculations. Due to its computational simplicity, the Lennard-Jones potential is used extensively in computer simulations even though more accurate potentials exist. Differentiating the L-J potential with respect to r gives an expression for the intermolecular force between two molecules. This force may be attractive or repulsive, depending on the value of r: when r is very small, the two molecules repel each other. Whereas the functional form of the attractive term has a clear physical justification, the repulsive term has no theoretical justification. The L-J potential is nevertheless a relatively good approximation; due to its simplicity, it is often used to model overlap interactions in molecular models, and it is a good approximation at short distances for neutral atoms and molecules. The lowest-energy arrangement of an infinite number of atoms described by a Lennard-Jones potential is hexagonal close-packing; on raising the temperature, the lowest free-energy arrangement becomes cubic close-packing, and then liquid.
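As a concrete sketch, in reduced units with illustrative ε = σ = 1, here are the potential, the force obtained by differentiating it, and the minimum at rm = 2^(1/6)σ:

```python
# Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) and
# the force F(r) = -dV/dr, in reduced units with illustrative eps = sigma = 1.
def lj_potential(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """F(r) = 24*eps*(2*(sigma/r)**12 - (sigma/r)**6)/r; positive = repulsive."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

r_m = 2.0 ** (1.0 / 6.0)     # the minimum lies at r_m = 2**(1/6) * sigma
print(lj_potential(r_m))     # approximately -eps, the well depth
print(lj_force(r_m))         # approximately 0: the force vanishes at r_m
```

For r below rm the force is positive (repulsive), and for r above rm it is negative (attractive), matching the discussion above.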
Lennard-Jones potential
–
A graph of strength versus distance for the 12-6 Lennard-Jones potential.
7.
Yukawa potential
–
The potential is monotonically increasing in r, and it is negative, implying the force is attractive. In the SI system, the unit of the Yukawa potential is 1/m. The Coulomb potential of electromagnetism is an example of a Yukawa potential with the factor e^(−mr) equal to 1 everywhere; this can be interpreted as saying that the photon mass m is equal to 0. In interactions between a meson field and a fermion field, the constant g is equal to the gauge coupling constant between those fields. In the case of the nuclear force, the fermions would be protons and neutrons. Since the mediator is massive, the corresponding force has a certain range, inversely proportional to the mass of the mediator m. If the mass is zero, then the Yukawa potential equals a Coulomb potential and the range is said to be infinite. In fact, we have: m = 0 ⇒ e^(−mr) = e^0 = 1. A comparison of the long-range strength for the Yukawa and Coulomb potentials is shown in Figure 2. It can be seen that the Coulomb potential has effect over a greater distance, whereas the Yukawa potential approaches zero rather quickly. However, both the Yukawa potential and the Coulomb potential are non-zero for any large r. The easiest way to understand that the Yukawa potential is associated with a massive field is by examining its Fourier transform. In this form, the fraction 4π/(k^2 + m^2) is seen to be the Green's function of the Klein–Gordon equation. The Yukawa potential can be derived as the lowest-order amplitude of the interaction of a pair of fermions.
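The m = 0 limit and the faster large-r decay can be checked numerically; g and m below are illustrative values:

```python
import math

# Yukawa potential V(r) = -g**2 * exp(-m*r) / r versus the Coulomb form
# -g**2 / r (its m = 0 limit); g and m are illustrative values.
def yukawa(r, g=1.0, m=1.0):
    return -g * g * math.exp(-m * r) / r

def coulomb(r, g=1.0):
    return -g * g / r

print(yukawa(2.0, m=0.0) == coulomb(2.0))      # True: m = 0 gives Coulomb
print(abs(yukawa(10.0)) < abs(coulomb(10.0)))  # True: Yukawa dies off faster
```

The exponential factor suppresses the Yukawa potential beyond distances of order 1/m, which is exactly the finite range discussed above.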
Yukawa potential
–
Figure 1: A comparison of Yukawa potentials where g=1 and with various values for m.
8.
Morse potential
–
The Morse potential, named after physicist Philip M. Morse, is a convenient interatomic interaction model for the potential energy of a diatomic molecule. It also accounts for the non-zero transition probability for overtone and combination bands. The Morse potential can also be used to model other interactions, such as the interaction between an atom and a surface. Due to its simplicity, it is not used in modern spectroscopy. However, its mathematical form inspired the MLR (Morse/Long-range) potential, the most popular potential function used for fitting spectroscopic data. The dissociation energy of the bond can be calculated by subtracting the zero-point energy E(0) from the depth of the well. This form equals −De at its minimum, i.e. at r = re, and it clearly shows that the Morse potential is the combination of a short-range repulsive term and a long-range attractive term, analogous to the Lennard-Jones potential. Like the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods. One approach involves applying a factorization method to the Hamiltonian; the Schrödinger equation then takes a simple eigenvalue form, with normalized eigenstates expressible in terms of generalized Laguerre polynomials and the gamma function Γ(n + 1) = n!. Mathematically, the spacing of Morse levels decreases with n: E(n+1) − E(n) = hν0 − (n + 1)(hν0)^2 / (2De). This trend matches the anharmonicity found in real molecules.
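The shrinking level spacing can be illustrated directly from the energy-level formula; hν0 and De below are illustrative reduced-unit values:

```python
# Morse level energies E_n = h*nu0*(n + 1/2) - (h*nu0*(n + 1/2))**2 / (4*De),
# in reduced units with h*nu0 = 1 and an illustrative well depth De = 10.
def morse_level(n, h_nu0=1.0, De=10.0):
    x = h_nu0 * (n + 0.5)
    return x - x * x / (4.0 * De)

# Adjacent spacings shrink linearly with n, as
# E_(n+1) - E_n = h*nu0 - (n + 1)*(h*nu0)**2 / (2*De):
spacings = [morse_level(n + 1) - morse_level(n) for n in range(4)]
print(spacings)  # roughly [0.95, 0.90, 0.85, 0.80]
```

Each spacing is smaller than the last by the fixed amount (hν0)^2/(2De), which is the anharmonic signature mentioned above.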
9.
Finite difference method
–
FDMs are thus discretization methods. FDMs are the dominant approach to numerical solutions of partial differential equations. The starting point is the Taylor expansion f(x0 + h) = f(x0) + (f'(x0)/1!)h + (f''(x0)/2!)h^2 + ⋯ + (f^(n)(x0)/n!)h^n + R_n(x), where R_n(x) is a remainder term denoting the difference between the Taylor polynomial of degree n and the original function. The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid. Note that this means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner. An expression of general interest is the local truncation error of a method. Typically expressed using big-O notation, local truncation error refers to the error from a single application of a method. The remainder term of a Taylor polynomial is convenient for analyzing this local truncation error. Using the Lagrange form of the remainder for f, R_n(x) = (f^(n+1)(ξ)/(n+1)!) h^(n+1), where x0 < ξ < x0 + h, the dominant term of the local truncation error can be discovered.
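A minimal sketch of these ideas: a forward-difference approximation to f'(x), whose error shrinks roughly in proportion to h, as the O(h) truncation-error analysis predicts:

```python
import math

# Forward difference f'(x) ~ (f(x + h) - f(x)) / h. Its local truncation
# error is O(h), so shrinking h tenfold shrinks the error about tenfold.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # derivative of sin at x = 1
errors = []
for h in (1e-1, 1e-2, 1e-3):
    errors.append(abs(forward_diff(math.sin, 1.0, h) - exact))
    print(h, errors[-1])
```

The dominant error term is (h/2)f''(ξ), which is why the observed error ratio tracks the ratio of the step sizes.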
Finite difference method
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
10.
Finite volume method
–
The finite volume method is a method for representing and evaluating partial differential equations in the form of algebraic equations. Similar to the finite difference method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages. Here, f = f(ρ(x, t)) represents the flux or flow of ρ. Conventionally, positive f represents flow to the right, while negative f represents flow to the left. We assume that f is well behaved and that we can reverse the order of integration. Also, recall that flow is normal to the unit area of the cell. The interface fluxes are f_(i±1/2) = f(ρ(x_(i±1/2), t)). This equation is exact for the volume averages; i.e. no approximations have been made during its derivation. We can also consider the general conservation problem, represented by the following PDE: ∂u/∂t + ∇ · f(u) = 0.
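A minimal 1D finite-volume sketch of the flux form above, using a simple upwind interface flux for f = cρ with c > 0 (an assumption of ours; the text does not prescribe a particular flux):

```python
# One upwind finite-volume update for d(rho)/dt + d(c*rho)/dx = 0 with c > 0
# on a periodic mesh; flux[i] is the interface flux f_(i-1/2) = c*rho[i-1].
def fv_step(rho, c, dx, dt):
    n = len(rho)
    flux = [c * rho[i - 1] for i in range(n)]  # f_(i-1/2), upwind from the left
    return [rho[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

rho = [1.0 if 2 <= i < 5 else 0.0 for i in range(10)]
total = sum(rho)
for _ in range(10):
    rho = fv_step(rho, c=1.0, dx=1.0, dt=1.0)  # CFL = 1: an exact shift per step
print(sum(rho) == total)  # True: the scheme is conservative
```

Because the flux leaving one cell is exactly the flux entering its neighbour, the total amount of ρ is preserved to machine precision, illustrating the conservativity argued above.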
Finite volume method
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
11.
Finite element method
–
The finite element method is a numerical technique for finding approximate solutions to boundary value problems for partial differential equations. It is also referred to as finite element analysis (FEA). It subdivides a large problem into smaller, simpler parts that are called finite elements. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function. The resulting global system of equations has known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer. To explain the approximation in this process, FEM is commonly introduced as a special case of the Galerkin method. In simple terms, it is a procedure that minimizes the error of approximation by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. These equation sets are the element equations; they are linear if the underlying PDE is linear, and nonlinear otherwise. This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is often carried out by FEM software using coordinate data generated from the subdomains. FEM is best understood from its practical application, known as finite element analysis; FEA as applied in engineering is a computational tool for performing engineering analysis.
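A minimal Galerkin finite element sketch for the model problem −u'' = 2 on (0, 1) with u(0) = u(1) = 0, using linear elements on a uniform mesh (a toy example of our own; for this problem the exact solution u(x) = x(1 − x) happens to be reproduced at the nodes):

```python
# Galerkin FEM for -u'' = 2 on (0, 1), u(0) = u(1) = 0, with linear elements
# on a uniform mesh. Assembled stiffness matrix: tridiag(-1, 2, -1)/h; the
# load vector for f = 2 is 2h per interior node.
def fem_poisson(n):
    h = 1.0 / n
    m = n - 1                      # number of interior nodes
    sub = [-1.0 / h] * m           # sub-/super-diagonal entries
    diag = [2.0 / h] * m
    rhs = [2.0 * h] * m
    # Thomas algorithm: forward elimination, then back-substitution.
    for i in range(1, m):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sub[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - sub[i] * u[i + 1]) / diag[i]
    return u

u = fem_poisson(10)
print(u[4])  # node x = 0.5: matches the exact value 0.25
```

The element equations here are linear because the PDE is linear, so the assembled global system is a single tridiagonal solve.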
Finite element method
–
Visualization of how a car deforms in an asymmetrical crash using finite element analysis. [1]
Finite element method
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
12.
Riemann solver
–
A Riemann solver is a numerical method used to solve a Riemann problem. Riemann solvers are heavily used in computational fluid dynamics and computational magnetohydrodynamics, and modern solvers are able to handle magnetic fields. For the hydrodynamic case, recent research results have shown the possibility of avoiding the iterations needed to calculate the exact solution for the Euler equations. Because exact iterative solutions are too costly, especially in magnetohydrodynamics, approximate solvers have to be used. Among the most popular: Roe used a linearisation of the Jacobian, which he then solves exactly. The description of the HLLE scheme in the book mentioned below is partially wrong, and the reader is referred to the original paper; the scheme as actually used is based on work that was never formally published. The HLLC solver was introduced by Toro. These solvers are quite efficient but somewhat more diffusive. In particular, the variant called the Rotated-RHLL solver is extremely robust and accurate.
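As a hedged illustration of an approximate Riemann solver, here is the Rusanov (local Lax–Friedrichs) flux, which is simpler and even more diffusive than the Roe and HLL-family solvers named above, applied to the inviscid Burgers flux f(u) = u²/2 (our own illustrative choice):

```python
# Rusanov (local Lax-Friedrichs) flux, one of the simplest approximate
# Riemann solvers, applied to the inviscid Burgers flux f(u) = u**2 / 2.
def f(u):
    return 0.5 * u * u

def rusanov_flux(uL, uR):
    """Interface flux from left/right Riemann states uL, uR."""
    s = max(abs(uL), abs(uR))  # bound on the local wave speed |f'(u)| = |u|
    return 0.5 * (f(uL) + f(uR)) - 0.5 * s * (uR - uL)

print(rusanov_flux(1.0, 1.0))  # equal states: just f(1) = 0.5
print(rusanov_flux(2.0, 0.0))  # a jump adds dissipation proportional to uL - uR
```

The dissipation term scaled by the wave-speed bound is what makes such central-type solvers robust but diffusive, the trade-off noted above.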
13.
Smoothed-particle hydrodynamics
–
Smoothed-particle hydrodynamics (SPH) is a computational method used for simulating the dynamics of continuum media, such as solid mechanics and fluid flows. It was developed initially for astrophysical problems, by Lucy and by Gingold and Monaghan. It has been used in many fields, including astrophysics, ballistics, volcanology and oceanography. The resolution of the method can easily be adjusted with respect to variables such as the density. The smoothed-particle hydrodynamics method works by dividing the fluid into a set of discrete elements, referred to as particles. These particles have a spatial distance (the "smoothing length"), over which their properties are "smoothed" by a kernel function. The contributions of each particle to a property are weighted according to their distance from the particle of interest and their density. Mathematically, this is governed by the kernel function. Kernel functions commonly used include the Gaussian function and the cubic spline; the latter function is exactly zero for particles further away than two smoothing lengths. This has the advantage of saving computational effort by not including the relatively minor contributions from distant particles. Similarly, the spatial derivative of a quantity can be obtained easily by virtue of the linearity of the derivative: ∇A(r) = Σ_j m_j (A_j/ρ_j) ∇W(|r − r_j|, h). Although the size of the smoothing length can be fixed in both space and time, this does not take advantage of the full power of SPH. In a very dense region where many particles are close together, the smoothing length can be made relatively short, yielding high spatial resolution.
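A minimal sketch of the SPH density sum with a 1D cubic spline kernel (normalization 2/(3h) in one dimension; the particle positions and masses are illustrative):

```python
# SPH density estimate rho_i = sum_j m_j * W(|x_i - x_j|, h) with the 1D
# cubic spline kernel, which has compact support of two smoothing lengths.
def cubic_spline_1d(r, h):
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)                  # 1D normalization constant
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0                               # zero beyond 2h: distant particles skipped

def sph_density(x, masses, h):
    return [sum(m_j * cubic_spline_1d(x_i - x_j, h)
                for x_j, m_j in zip(x, masses)) for x_i in x]

x = [0.0, 0.5, 1.0, 5.0]                     # the last particle is isolated
rho = sph_density(x, [1.0] * 4, h=1.0)
print(rho)
```

The isolated particle receives only its self-contribution W(0) = 2/(3h), while the clustered particles accumulate weighted contributions from their neighbours, exactly the distance-weighted sum described above.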
Smoothed-particle hydrodynamics
–
Fig. SPH simulation of ocean waves using FLUIDS v.1 (Hoetzlein)
14.
Monte Carlo method
–
Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. Their essential idea is using randomness to solve problems that might be deterministic in principle. They are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. When the probability distribution of the variable is parametrized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution; that is, in the limit, the samples being generated by the MCMC method will be samples from the desired distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler. In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. In other instances we are given a flow of probability distributions with an increasing level of sampling complexity. These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methodologies, these mean-field particle techniques rely on sequential interacting samples. The term mean field reflects the fact that each of the samples interacts with the empirical measures of the process. Monte Carlo methods tend to follow a particular pattern: define a domain of possible inputs; generate inputs randomly from a probability distribution over the domain; perform a deterministic computation on the inputs; and aggregate the results.
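The four-step pattern above can be sketched with the classic example of estimating π (a toy example of our own):

```python
import random

# The four-step Monte Carlo pattern applied to estimating pi: the domain is
# the unit square, and the deterministic computation asks whether a random
# point falls inside the quarter circle of radius 1.
def estimate_pi(n, seed=0):
    rng = random.Random(seed)                # 1. domain: the unit square
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()    # 2. generate random inputs
        if x * x + y * y <= 1.0:             # 3. deterministic computation
            inside += 1
    return 4.0 * inside / n                  # 4. aggregate the results

print(estimate_pi(100_000))  # close to 3.14159
```

The fraction of points inside the quarter circle approximates π/4, so multiplying by 4 aggregates the samples into an estimate of π.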
15.
Monte Carlo integration
–
In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid, Monte Carlo randomly chooses the points at which the integrand is evaluated. This method is particularly useful for higher-dimensional integrals. There are different methods of performing a Monte Carlo integration, including uniform sampling, stratified sampling, importance sampling and mean-field particle methods. In numerical integration, methods such as the trapezoidal rule use a deterministic approach. Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the outcome is an approximation of the correct value with respective error bars, and the correct value is likely to be within those error bars. This is because the law of large numbers ensures that lim_(N→∞) Q_N = I. Given the estimation of I from Q_N, the error bars of Q_N can be estimated by the sample variance, using the unbiased estimate of the variance: Var(f) ≡ σ_N^2 = (1/(N − 1)) Σ_(i=1)^N (f(x_i) − ⟨f⟩)^2. As long as the sequence of sample variances is bounded, this variance decreases asymptotically to zero as 1/N. The estimation of the error of Q_N is thus δQ_N ≈ V σ_N/√N, which decreases as 1/√N; this is the standard error of the mean multiplied by V. While naive Monte Carlo works for simple examples, this is not the case in most problems.
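A minimal sketch of a plain Monte Carlo estimate with the error bar δQ_N ≈ V σ_N/√N described above (the integrand and interval are illustrative choices of ours):

```python
import random

# Plain Monte Carlo estimate Q_N = V * <f> of an integral, together with the
# error bar delta Q_N ~ V * sigma_N / sqrt(N) built from the unbiased
# sample variance. The integrand x**2 on [0, 2] is an illustrative choice.
def mc_integrate(f, a, b, n, seed=0):
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)  # unbiased sigma_N^2
    V = b - a
    return V * mean, V * (var / n) ** 0.5

q, err = mc_integrate(lambda x: x * x, 0.0, 2.0, 50_000)
print(q, err)  # estimate near the exact value 8/3, with a small error bar
```

Quadrupling the sample count only halves the error bar, the 1/√N behaviour noted above, which is why variance-reduction techniques such as importance sampling matter in practice.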
Monte Carlo integration
–
An illustration of Monte Carlo integration. In this example, the domain D is the inner circle and the domain E is the square. Because the square's area (4) can be easily calculated, the area of the circle (π·1^2) can be estimated by the ratio (0.8) of the points inside the circle (40) to the total number of points (50), yielding an approximation for the circle's area of 4·0.8 = 3.2 ≈ π·1^2.
16.
N-body simulation
–
In physics and astronomy, an N-body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces such as gravity. Direct N-body simulations are used to study the dynamical evolution of star clusters. The "particles" treated by the simulation may or may not correspond to physical objects which are particulate in nature. For example, an N-body simulation of a star cluster might have a particle per star, so each particle has some physical significance. The number of particles must be chosen as a compromise between accuracy and manageable computer requirements. Direct calculations are used in situations where close interactions between objects such as stars or planets are important to the evolution of the system. The first direct N-body simulations were carried out by Sebastian von Hoerner at the Astronomisches Rechen-Institut in Heidelberg, Germany. Regularization is a mathematical trick to remove the singularity in the Newtonian law of gravitation for two particles which approach each other arbitrarily closely. Sverre Aarseth's codes are used to study the dynamics of star clusters and galactic nuclei. Many simulations are large enough that the effects of general relativity in establishing a Friedmann–Lemaître–Robertson–Walker cosmology are significant. This is incorporated in a comoving coordinate system, which causes the particles to slow in comoving coordinates. The boundary conditions of these cosmological simulations are usually periodic, so that one edge of the volume matches up with the opposite edge. N-body simulations are simple in principle, because they merely involve integrating the 6N ordinary differential equations defining the particle motions in Newtonian gravity; in practice, however, a number of refinements are commonly used. There are two basic approximation schemes to decrease the computational time for such simulations.
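A direct-summation sketch of the O(N²) pairwise accelerations, with a softening length standing in for proper regularization (2D toy setup and units with G = 1 are our own assumptions):

```python
# Direct-summation N-body accelerations: every particle feels every other,
# an O(N^2) double loop. The softening length eps is a crude stand-in for
# the regularization of close encounters discussed above. Units with G = 1.
def accelerations(pos, masses, eps=1e-3):
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + eps * eps   # softened squared distance
            inv_r3 = r2 ** -1.5
            acc[i][0] += masses[j] * dx * inv_r3
            acc[i][1] += masses[j] * dy * inv_r3
    return acc

pos = [[0.0, 0.0], [1.0, 0.0]]
acc = accelerations(pos, [1.0, 1.0])
print(acc[0])  # particle 0 is pulled toward particle 1 (+x direction)
```

The two basic approximation schemes mentioned above (tree and particle-mesh methods) exist precisely to avoid this double loop for large N.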
17.
Molecular dynamics
–
Molecular dynamics (MD) is a computer simulation method for studying the physical movements of atoms and molecules, and is thus a type of N-body simulation. The atoms and molecules are allowed to interact for a fixed period of time, giving a view of the dynamical evolution of the system. In 1957, Alder and Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres. In 1960, Gibson et al. simulated radiation damage of solid copper by using a Born–Mayer type of repulsive interaction along with a cohesive surface force. In 1964, Rahman published landmark simulations of liquid argon that used a Lennard-Jones potential; calculations of properties such as the coefficient of self-diffusion compared well with experimental data. Even before it became possible to simulate molecular dynamics with computers, some undertook the hard work of trying it with physical models such as macroscopic spheres. The idea was to arrange them to replicate the properties of a liquid. J.D. Bernal said, in 1962: "... I took a number of rubber balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches." In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as ion subplantation. It is also used to examine the physical properties of nanotechnological devices that cannot yet be created. In principle, MD can be used to predict protein structure by simulating folding of the polypeptide chain from a random coil. The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics, of which a popular method is nuclear magnetic resonance spectroscopy. Limits of the method are related to the underlying molecular mechanics force fields.
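A minimal sketch of a velocity Verlet time step, the kind of integrator commonly used in MD; the harmonic force below is an illustrative stand-in for a real force field:

```python
# One velocity Verlet step: update positions with the current acceleration,
# then velocities with the average of old and new accelerations. The
# harmonic force -k*x is an illustrative stand-in for a real force field.
def velocity_verlet(x, v, force, dt, m=1.0):
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = force(x_new) / m
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

k = 1.0
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = velocity_verlet(x, v, lambda x: -k * x, dt=0.01)
# The energy 0.5*v**2 + 0.5*k*x**2 stays close to its initial value of 0.5:
print(0.5 * v * v + 0.5 * k * x * x)
```

Good long-time energy behaviour is the reason symplectic integrators of this kind dominate MD, where trajectories run for millions of steps.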
Molecular dynamics
–
Example of a molecular dynamics simulation in a simple system: deposition of a single Cu atom on a Cu (001) surface. Each circle illustrates the position of a single atom; note that the actual atomic interactions used in current simulations are more complex than those of 2-dimensional hard spheres.
18.
Sergei K. Godunov
–
Sergei Konstantinovich Godunov is a professor at the Sobolev Institute of Mathematics of the Russian Academy of Sciences in Novosibirsk, Russia. Professor Godunov's most influential work is in the area of numerical mathematics. It has had a major impact on engineering, particularly on the development of methodologies used in computational fluid dynamics and other computational fields. In May 1997 a symposium entitled "Godunov-type numerical methods" was held at the University of Michigan to honour Godunov. These methods are widely used to compute continuum processes dominated by wave propagation. On the following day, Godunov received an honorary degree from the University of Michigan. Godunov's theorem: linear numerical schemes for solving partial differential equations, having the property of not generating new extrema (monotone schemes), can be at most first-order accurate. Godunov's scheme is a conservative numerical scheme for solving partial differential equations. 1946-1951 - Department of Mechanics and Mathematics, Moscow State University. 1951 - Diploma, Moscow State University. 1954 - Candidate of Physical and Mathematical Sciences. 1965 - Doctor of Physical and Mathematical Sciences. 1976 - Corresponding member of the USSR Academy of Sciences. 1994 - Member of the Russian Academy of Sciences. 1997 - Honorary professor of the University of Michigan.
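Consistent with Godunov's theorem, the first-order Godunov (upwind) scheme for linear advection generates no new extrema; a minimal sketch with illustrative data:

```python
# For linear advection u_t + c*u_x = 0 with c > 0, Godunov's scheme reduces
# to first-order upwind. Each update is a convex combination of neighbouring
# values (for 0 <= c*dt/dx <= 1), so no new extrema are created -- the
# monotonicity that, by Godunov's theorem, limits the scheme to first order.
def godunov_upwind(u, c, dx, dt):
    lam = c * dt / dx
    n = len(u)
    return [u[i] - lam * (u[i] - u[i - 1]) for i in range(n)]  # periodic

u = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
for _ in range(3):
    u = godunov_upwind(u, c=1.0, dx=1.0, dt=0.5)
print(min(u) >= 0.0 and max(u) <= 1.0)  # True: values stay within [0, 1]
```

The price of this monotonicity is numerical diffusion: the initially sharp profile is smeared, which is why higher-order non-linear (limiter-based) schemes were later developed.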
Sergei K. Godunov
–
Sergei Godunov
19.
Stanislaw Ulam
–
Stanisław Marcin Ulam was a Polish-American mathematician. He participated in America's Manhattan Project, originated the Teller–Ulam design of thermonuclear weapons, and suggested nuclear pulse propulsion. In applied mathematics, he proved some theorems and proposed several conjectures. In 1939 he sailed for America for the last time with his 17-year-old brother Adam Ulam. He became an assistant professor at the University of Wisconsin–Madison, and a United States citizen in 1941. In October 1943, he received an invitation from Hans Bethe to join the Manhattan Project at the secret Los Alamos Laboratory in New Mexico. There, he worked on the hydrodynamic calculations to predict the behavior of the explosive lenses that were needed by an implosion-type weapon. He was assigned to Edward Teller's group, where he worked for Teller and Enrico Fermi. With collaborators including his wife, Françoise Aron Ulam, he found that Teller's "Super" design was unworkable. In January 1951, Ulam and Teller came up with the Teller–Ulam design, which is the basis for all thermonuclear weapons. With Fermi and John Pasta, Ulam studied the Fermi–Pasta–Ulam problem, which became the inspiration for the field of non-linear science. Ulam was born in Lwów on 13 April 1909. At that time, Galicia was part of the Austro-Hungarian Empire, in what was known to Poles as the Austrian partition. In 1918, it became part of the newly restored Poland, and the city took its Polish name again, Lwów. The Ulams were a wealthy Jewish family of bankers, industrialists, and other professionals.
Stanislaw Ulam
–
Stanisław Ulam
Stanislaw Ulam
–
The Scottish Café 's building now houses the Universal Bank in Lviv, the present name of Lwów.
Stanislaw Ulam
–
Stan Ulam Holding the FERMIAC
Stanislaw Ulam
–
Ivy Mike, the first full test of the Teller–Ulam design (a staged fusion bomb), with a yield of 10.4 megatons on 1 November 1952
20.
John von Neumann
–
John von Neumann was a Hungarian-American mathematician, physicist, inventor, computer scientist and polymath. Von Neumann made major contributions to many fields, including mathematics, physics, economics and statistics. An unfinished manuscript, written while he was in the hospital, was later published as The Computer and the Brain. His analysis of the structure of self-replication preceded the discovery of the structure of DNA. Of his own work he wrote: "... Also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan Project, developing the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission, and later as one of its commissioners. He was born Neumann János Lajos to a non-observant Jewish family; his Hebrew name was Yonah. Von Neumann's place of birth was Budapest in the Kingdom of Hungary, then part of the Austro-Hungarian Empire. He was the eldest of three children, with two younger brothers: Michael, born in 1907, and Nicholas, born in 1911. His father, Neumann Miksa, was a banker who held a doctorate in law; he had moved to Budapest at the end of the 1880s. Miksa's father and grandfather were both born in Zemplén County, northern Hungary.
John von Neumann
–
Excerpt from the university calendars for 1928 and 1928–1929 of the Friedrich-Wilhelms-Universität Berlin announcing Neumann's lectures on axiomatic set theory and logics, problems in quantum mechanics and special mathematical functions. Notable colleagues were Georg Feigl, Issai Schur, Erhard Schmidt, Leó Szilárd, Heinz Hopf, Adolf Hammerstein and Ludwig Bieberbach.
John von Neumann
–
John von Neumann in the 1940s
John von Neumann
–
Julian Bigelow, Herman Goldstine, J. Robert Oppenheimer and John von Neumann at the Princeton Institute for Advanced Study.
John von Neumann
–
Von Neumann's gravestone
21.
Boris Galerkin
–
Boris Grigoryevich Galerkin, born in Polotsk, Vitebsk Governorate, Russian Empire, was a Jewish Soviet mathematician and engineer. He was born in March to Girsh-Shleym Galerkin and Perla Basia Galerkina. Galerkin passed his examinations in Minsk as an external student, and was then enrolled at the St. Petersburg Technological Institute, in the mechanics department. Due to lack of funds, Boris Grigoryevich had to combine studying with working as a draftsman and giving private lessons. At some point in his life, Galerkin married a second niece; they did not have any children. Like many students of technology, Galerkin was involved in political activities and joined a social-democratic group. In the year he graduated from the institute, Galerkin became a member of the Russian Social-Democratic Party, which provides a plausible explanation for his frequent job changes. In 1905 he was arrested for organizing a strike among the engineers, and from 1906 he devoted himself to political activity and did not work anywhere else. In prison, known as "Kresty", Boris Grigoryevich devoted himself to science and engineering; prison conditions of that time allowed such opportunities. Boris Grigoryevich did not like to remind others about his revolutionary youth.
Boris Galerkin
–
Boris Galerkin
22.
Edward Norton Lorenz
–
Edward Norton Lorenz was an American mathematician and meteorologist, and a pioneer of chaos theory. He coined the term butterfly effect. Lorenz was born in Connecticut. He studied mathematics at Harvard University in Cambridge, Massachusetts. Until 1946 he served as a meteorologist for the United States Army Air Corps; after his return from World War II, he decided to study meteorology. Lorenz earned two degrees in the field from the Massachusetts Institute of Technology, where he later was a professor for many years. He was a Professor Emeritus at MIT until his death. During the 1950s, Lorenz became skeptical of the appropriateness of the linear statistical models used in meteorology, as most atmospheric phenomena involved in weather forecasting are non-linear. In his landmark 1963 paper, Deterministic Nonperiodic Flow, he states: "Two states differing by imperceptible amounts may eventually evolve into two considerably different states..." His description of the butterfly effect followed in 1969. In his later years, Lorenz lived in Cambridge, Massachusetts. He was an avid outdoorsman who enjoyed cross-country skiing. According to his daughter, Cheryl Lorenz, Lorenz had "finished a paper a week ago with a colleague." In April 2008, Lorenz died at his home in Cambridge at the age of 90, having suffered from cancer.
Edward Norton Lorenz
–
Edward Norton Lorenz
23.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics, and biology. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Many problems in fluid mechanics can best be solved by numerical methods, typically using computers; a modern discipline, called computational fluid dynamics, is devoted to this approach to solving fluid problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille and Gotthilf Hagen. Fluid statics, or hydrostatics, is the branch of fluid mechanics that studies fluids at rest. Hydrostatics is fundamental to the engineering of equipment for storing, transporting and using fluids. It is also relevant to meteorology, to medicine, and to many other fields. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow: the science of liquids and gases in motion. It has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are even used in traffic engineering and crowd dynamics. Fluid mechanics is a subdiscipline of continuum mechanics. A fluid at rest has no shear stress.
Fluid mechanics
–
Balance for some integrated fluid quantity in a control volume enclosed by a control surface.
24.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained, step-by-step set of operations to be performed. Algorithms perform calculation, data processing, and/or automated reasoning tasks. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: "Algorism is the art by which at present we use those Indian figures, which number two times five." An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability, a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules.
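The informal definition above ("a set of rules that precisely defines a sequence of operations" that "stops eventually") is commonly illustrated with Euclid's algorithm for the greatest common divisor; a minimal sketch in Python:

```python
# Euclid's algorithm: the classic example of an algorithm -- a precise
# sequence of operations that is guaranteed to terminate.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor by repeatedly replacing (a, b) with (b, a mod b)."""
    while b != 0:            # terminates: the second argument strictly
        a, b = b, a % b      # decreases toward zero at each step
    return a

print(gcd(252, 105))         # -> 21
```

The termination argument in the comment is exactly what distinguishes an algorithm from an arbitrary program in the informal definition above.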
Algorithm
–
Alan Turing's statue at Bletchley Park.
Algorithm
–
Logical NAND algorithm implemented electronically in 7400 chip
25.
Fluid dynamics
–
In physics, fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of fluids in motion. It has several subdisciplines itself, including aerodynamics and hydrodynamics. Some of its principles are even used in traffic engineering, where traffic is treated as a continuous fluid, and in crowd dynamics. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in the names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects; however, the continuum assumption treats fluids as continuous, rather than discrete. The fact that the fluid is made up of discrete molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve. Some of the simplifications allow appropriate fluid dynamics problems to be solved in closed form.
Fluid dynamics
26.
Boundary value problem
–
A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions. Boundary value problems arise in several branches of physics, as any physical differential equation will have them. Problems involving the wave equation, such as the determination of normal modes, are often stated as boundary value problems. A large class of important boundary value problems are the Sturm–Liouville problems; the analysis of these problems involves the eigenfunctions of a differential operator. To be useful in applications, a boundary value problem should be well posed. This means that given the input to the problem there exists a unique solution, which depends continuously on the input. Among the earliest boundary value problems to be studied is the Dirichlet problem of finding the harmonic functions (solutions to Laplace's equation); the solution was given by Dirichlet's principle. Boundary value problems are similar to initial value problems, except that the conditions are specified at the extremes of the independent variable rather than at a single point. For example, consider the equation y″(x) + y(x) = 0 with the boundary conditions y(0) = 0 and y(π/2) = 2. Without the boundary conditions, the general solution to this equation is y(x) = A sin(x) + B cos(x). From the boundary condition y(0) = 0 one obtains 0 = A · 0 + B · 1, which implies that B = 0. From the boundary condition y(π/2) = 2 one finds 2 = A · 1, and so A = 2. One sees that imposing boundary conditions allowed one to determine a unique solution, which in this case is y(x) = 2 sin(x). A boundary condition which specifies the value of the function itself is a Dirichlet boundary condition, or first-type boundary condition. A boundary condition which specifies the value of the normal derivative of the function is a Neumann boundary condition, or second-type boundary condition.
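The kind of boundary value problem worked through above can also be solved numerically. The sketch below (an illustration added here, not part of the article) discretizes y″ + y = 0 with y(0) = 0 and y(π/2) = 2 using central finite differences and compares against the exact solution y = 2 sin(x):

```python
import numpy as np

# Solve y'' + y = 0 on [0, pi/2] with y(0) = 0, y(pi/2) = 2 using a
# second-order central finite-difference scheme. Exact solution: y = 2 sin(x).

n = 199                          # interior grid points
a, b = 0.0, np.pi / 2
h = (b - a) / (n + 1)
x = np.linspace(a, b, n + 2)

# Interior equations: (y[i-1] - 2 y[i] + y[i+1]) / h^2 + y[i] = 0
main = (-2.0 / h**2 + 1.0) * np.ones(n)
off = (1.0 / h**2) * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

rhs = np.zeros(n)
rhs[0] -= 0.0 / h**2             # contribution of boundary value y(0) = 0
rhs[-1] -= 2.0 / h**2            # contribution of boundary value y(pi/2) = 2

y = np.zeros(n + 2)
y[0], y[-1] = 0.0, 2.0
y[1:-1] = np.linalg.solve(A, rhs)

err = np.max(np.abs(y - 2 * np.sin(x)))
print(f"max error vs 2 sin(x): {err:.2e}")
```

With 199 interior points the maximum error against 2 sin(x) is far below 10⁻³, consistent with the scheme's second-order accuracy.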
Boundary value problem
–
A region where a differential equation is valid, with the associated boundary values
27.
Supercomputer
–
A supercomputer is a computer with a high level of computational capacity compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second. As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS. As of June 2016, the fastest supercomputer in the world, the Sunway TaihuLight in China, tops the rankings in the TOP500 list, and China for the first time had more computers on the list than the United States. However, U.S.-built computers held ten of the top 20 positions. Throughout their history, supercomputers have been essential in the field of cryptanalysis. The early machines of Seymour Cray used innovative designs and parallelism to achieve superior computational peak performance. Cray left CDC in 1972 to form Cray Research. Four years after leaving CDC, Cray delivered the Cray-1, which became one of the most successful supercomputers in history. The liquid-cooled Cray-2 had Fluorinert pumped through it as it operated; it was the world's second fastest after the M-13 supercomputer in Moscow. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaFLOPS per processor. The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a three-dimensional crossbar network. The Intel Paragon was ranked the fastest in the world in 1993.
Supercomputer
–
IBM's Blue Gene/P supercomputer at Argonne National Laboratory runs over 250,000 processors using normal data center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network
Supercomputer
–
A Cray-1 preserved at the Deutsches Museum
Supercomputer
–
A Blue Gene /L cabinet showing the stacked blades, each holding many processors
Supercomputer
–
An IBM HS20 blade
28.
Transonic
–
In aeronautics, transonic flow is air flowing around an object at a speed close to the speed of sound, so that the local airflow is partly subsonic and partly supersonic. This condition depends not only on the speed of the craft, but also on the temperature of the airflow in the vehicle's local environment: a vehicle can travel below Mach 1 overall while a significant fraction of the airflow over it is supersonic. Most modern powered aircraft are engineered to operate at transonic air speeds. It is the fuel cost of the drag that typically limits the airspeed. Attempts to reduce drag can be seen on all high-speed aircraft; another common form is a wasp-waist fuselage, a side effect of the Whitcomb area rule. Severe instability can occur at transonic speeds, because shock waves move through the air at the speed of sound. Transonic speeds can also occur at the tips of rotor blades of helicopters and aircraft. This may lead to accidents if it occurs, and it is one of the limiting factors of the forward speed of helicopters. At transonic speeds supersonic expansion fans form intense low-temperature areas at various points around an aircraft. If the temperature drops below the dew point, a visible cloud will form. These clouds remain with the aircraft as it travels. It is not necessary for the aircraft as a whole to reach supersonic speeds for these clouds to form.
Transonic
–
Aerodynamic condensation evidence of supersonic expansion fans around a transonic F/A-18
Transonic
–
Shock waves may appear as weak optical disturbances above airliners with supercritical wings
29.
Turbulence
–
Turbulence or turbulent flow is a flow regime in fluid dynamics characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar regime, which occurs when a fluid flows in parallel layers, with no disruption between those layers. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason, turbulence is easier to create in fluids of low viscosity and more difficult in highly viscous fluids. In general terms, in turbulent flow unsteady vortices of many sizes appear and interact with each other, and consequently drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe, for instance. However, this effect can also be exploited, as by aerodynamic spoilers on aircraft, which deliberately "spoil" the laminar flow to increase drag and reduce lift. Turbulence has long resisted detailed physical analysis, since the interactions within turbulence create a very complex situation. Richard Feynman described turbulence as the most important unsolved problem of classical physics. Smoke rising from a cigarette is mostly turbulent flow; however, for the first few centimeters the flow is laminar. The plume becomes turbulent as its Reynolds number increases, due to its characteristic length scale increasing. Another example is flow over a golf ball: if the ball were smooth, the boundary-layer flow over the front of the sphere would be laminar at typical conditions, and would separate early, producing a large wake. To prevent this from happening, the surface is dimpled to perturb the boundary layer and promote transition to turbulence.
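The Reynolds number mentioned above is the standard indicator of whether a flow will be laminar or turbulent. A back-of-envelope sketch, using the conventional (not sharply physical) pipe-flow thresholds of roughly 2300 and 4000 and typical property values for water:

```python
# Reynolds number Re = rho * v * L / mu, with the conventional pipe-flow
# regime thresholds (~2300 laminar limit, ~4000 fully turbulent).

def reynolds(rho: float, v: float, length: float, mu: float) -> float:
    """Re for density rho (kg/m^3), speed v (m/s), length scale (m), viscosity mu (Pa s)."""
    return rho * v * length / mu

def regime(re: float) -> str:
    if re < 2300:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "turbulent"

# Water (rho ~ 1000 kg/m^3, mu ~ 1.0e-3 Pa s) in a 25 mm pipe:
for v in (0.05, 0.12, 1.0):
    re = reynolds(1000.0, v, 0.025, 1.0e-3)
    print(f"v = {v:4.2f} m/s -> Re = {re:8.0f} ({regime(re)})")
```

Raising the speed (or the characteristic length, as with the widening cigarette plume) raises Re, which is why the same fluid can be laminar in one part of a flow and turbulent in another.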
Turbulence
–
Flow visualization of a turbulent jet, made by laser-induced fluorescence. The jet exhibits a wide range of length scales, an important characteristic of turbulent flows.
Turbulence
–
Laminar and turbulent water flow over the hull of a submarine
Turbulence
–
Turbulence in the tip vortex from an airplane wing
30.
Wind tunnel
–
A wind tunnel is a tool used in aerodynamic research to study the effects of air moving past solid objects. A wind tunnel consists of a tubular passage with the object under test mounted in the middle. Air is made to move past the object by a powerful fan system or other means. The test object, often called a wind tunnel model, is instrumented with suitable sensors to measure aerodynamic forces, pressure distribution, or other aerodynamic-related characteristics. Moving air past a stationary model reverses the situation of actual flight; in that way a stationary observer can measure the aerodynamic forces being imposed on the model. The development of wind tunnels accompanied the development of the airplane. Large wind tunnels were built during the Second World War, and wind tunnel testing was considered of strategic importance during the Cold War development of supersonic aircraft and missiles. Advances in computational fluid dynamics modelling on high-speed digital computers have reduced the demand for wind tunnel testing; however, wind tunnels are still used to verify CFD predictions. Pressures are measured in several ways in wind tunnels. Air velocity through the test section is determined by Bernoulli's principle, from measurement of the dynamic pressure, the static pressure and, for compressible flow, the temperature rise in the airflow. The direction of airflow around a model can be determined by tufts of yarn attached to the aerodynamic surfaces; the direction of airflow approaching a surface can be visualized by mounting threads in the airflow ahead of and aft of the model.
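The Bernoulli-based velocity measurement described above can be sketched in a few lines. This is a simplified illustration assuming incompressible flow (the constants and function name below are this edit's, not the article's): the dynamic pressure q = p_total − p_static gives airspeed v = √(2q/ρ).

```python
import math

# Pitot-static airspeed from Bernoulli's principle (incompressible flow):
# q = p_total - p_static,  v = sqrt(2 q / rho).

def airspeed(p_total: float, p_static: float, rho: float = 1.225) -> float:
    """Airspeed in m/s from total and static pressures in Pa.

    rho defaults to sea-level air density in kg/m^3.
    """
    q = p_total - p_static           # dynamic pressure, Pa
    return math.sqrt(2.0 * q / rho)

# A dynamic pressure of 612.5 Pa at sea level corresponds to ~31.6 m/s:
print(f"{airspeed(101937.5, 101325.0):.1f} m/s")
```

At higher speeds the incompressibility assumption breaks down, which is why compressible-flow tunnels also need the temperature measurement mentioned above.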
Wind tunnel
–
NASA wind tunnel with the model of a plane.
Wind tunnel
–
A model Cessna with helium-filled bubbles showing pathlines of the wingtip vortices.
Wind tunnel
–
Replica of the Wright brothers' wind tunnel.
Wind tunnel
–
Eiffel's wind tunnels in the Auteuil laboratory
31.
Flight test
–
The duration of a particular flight test program can vary from a few weeks to many years. There are typically two categories of test programs: commercial and military. Commercial flight testing is conducted to certify that the aircraft meets all applicable safety and performance requirements of the government certifying agency. These civil agencies are concerned with the aircraft's safety and with the pilot's manual accurately reporting the aircraft's performance; the market will determine the aircraft's suitability to operators. Military programs differ from commercial ones in that the government contracts with the manufacturer to design and build an aircraft to meet specific mission capabilities. In this case, the government has a direct stake in the aircraft's ability to perform the mission. Since the government is funding the program, it is more involved in the design and testing from early on. Often military test engineers are integrated into the manufacturer's flight test team, even before first flight. The final phase of military aircraft testing is the Operational Test (OT). OT is conducted by a government-only team with the mandate to certify that the aircraft is suitable and effective to carry out the intended mission. Testing of military aircraft is often conducted at military flight test facilities: the US Navy tests aircraft at Naval Air Station Patuxent River and the US Air Force at Edwards Air Force Base. The U.S. Air Force Test Pilot School and the U.S. Naval Test Pilot School are the programs designed to teach military test personnel. In the UK, most military flight testing is conducted by three organizations, among them the RAF and QinetiQ.
Flight test
–
Static pressure probe on the nose of a Sukhoi Superjet 100 prototype
Flight test
Flight test
–
Flight test pressure probes and water tanks in 747-8I prototype
Flight test
–
Static pressure probe rig aboard Boeing 747-8I prototype; A long tube, rolled up inside the barrel, is connected to a probe which can be deployed far behind the tail of the aircraft
32.
Space Shuttle
–
The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982. Five complete Shuttle systems were used on a total of 135 missions from 1981 to 2011, all launched from the Kennedy Space Center in Florida. The Shuttle fleet's total mission time was 1,322 days, 19 hours, 21 minutes and 23 seconds. Shuttle components included the orbiter vehicle, a pair of recoverable solid rocket boosters, and the expendable external tank containing liquid hydrogen and liquid oxygen. At the conclusion of each mission, the orbiter fired its Orbital Maneuvering System (OMS) engines to de-orbit and re-enter the atmosphere. When it landed at Edwards Air Force Base rather than in Florida, the orbiter was flown back on a specially modified Boeing 747. The first orbiter, Enterprise, had no orbital capability. Four fully operational orbiters were initially built: Columbia, Challenger, Discovery, and Atlantis. Of these, two were lost in mission accidents: Challenger in 1986 and Columbia in 2003, with a total of fourteen astronauts killed. A fifth operational orbiter, Endeavour, was built in 1991 to replace Challenger. The Space Shuttle was retired from service on July 21, 2011. Nixon's post-Apollo NASA budgeting withdrew support for all system components except the Shuttle, to which NASA applied the STS name. The system was retired with Atlantis making the final launch of the three-decade Shuttle program on July 8, 2011.
Space Shuttle
–
Discovery lifts off at the start of STS-120.
Space Shuttle
–
STS-129 ready for launch
Space Shuttle
–
President Nixon (right) with NASA Administrator Fletcher in January 1972, three months before Congress approved funding for the Shuttle program
Space Shuttle
–
STS-1 on the launch pad, December 1980
33.
Hyper-X
–
The X-43 was an unmanned experimental hypersonic aircraft with multiple planned scale variations meant to test various aspects of hypersonic flight. It was part of the X-plane series, and specifically of NASA's Hyper-X program. It set several airspeed records for jet-propelled aircraft: the X-43 is the fastest aircraft on record, at approximately Mach 9.6. A winged booster rocket with the X-43 placed on top, called a "stack", was drop-launched from a Boeing B-52 Stratofortress. The first plane in the series, the X-43A, was a single-use vehicle; three of them were built. Micro Craft Inc. built the X-43A and GASL built its engine. Following the cancellation of the National Aero-Space Plane program in November 1994, the United States lacked a cohesive hypersonic technology program. NASA's Langley Research Center was responsible for hypersonic technology development, and Dryden Flight Research Center for flight research. Phase I was a seven-year program, at a cost of approximately $ million, to flight-validate scramjet propulsion, hypersonic aerodynamics and design methods. Subsequent phases were not continued, as the series of aircraft was replaced by the X-51. The aircraft itself was a small unpiloted test vehicle measuring just over 3.7 m in length. The vehicle was a lifting-body design, where the body of the aircraft provides a significant amount of lift for flight, rather than relying on wings.
Hyper-X
–
Pegasus booster accelerating NASA's X-43A shortly after ignition during test flight (March 27, 2004)
Hyper-X
–
Artist's concept of X-43A with scramjet attached to the underside
Hyper-X
–
NASA's B-52B launch aircraft takes off carrying the X-43A hypersonic research vehicle (March 27, 2004)
Hyper-X
–
Full-scale model of the X-43 plane in Langley's 8-foot (2.4 m), high-temperature wind tunnel.
34.
Mach number
–
In fluid dynamics, the Mach number is a dimensionless quantity representing the ratio of flow velocity past a boundary to the local speed of sound. In the simplest explanation, the speed of Mach 1 is equal to the speed of sound; therefore, Mach 1.35 is about 35% faster than the speed of sound. The local speed of sound, and thereby the Mach number, depends on the condition of the surrounding medium, in particular the temperature and pressure. The Mach number is primarily used to determine the approximation with which a flow can be treated as an incompressible flow. The medium can be a gas or a liquid. As the Mach number is defined as the ratio of two speeds, it is a dimensionless number. If the Mach number is small, compressibility can be neglected and the simplified incompressible flow equations can be used. The Mach number is named after physicist and philosopher Ernst Mach, a designation proposed by aeronautical engineer Jakob Ackeret. Because the designation is unit-first ("Mach 2" rather than "2 Mach"), it is somewhat reminiscent of the early modern ocean sounding unit "mark", also unit-first, which may have influenced the use of the term Mach. In the decade preceding faster-than-sound human flight, aeronautical engineers referred to the speed of sound as Mach's number, never "Mach 1." In dry air at 15 degrees Celsius, the speed of sound is 340.3 m/s. The speed represented by Mach 1 is not a constant; it is mostly dependent on temperature. Mach number is useful because the fluid behaves in a similar manner at a given Mach number, regardless of other variables. In the following table, "ranges of Mach values" are referred to, not the "pure" meanings of the words "subsonic" and "supersonic".
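The temperature dependence described above follows from the ideal-gas speed of sound, a = √(γRT). A minimal sketch (the constants are standard values for dry air; the function names are this edit's):

```python
import math

# Mach number from true airspeed and local temperature, using the
# ideal-gas speed of sound a = sqrt(gamma * R * T).

GAMMA = 1.4          # ratio of specific heats for dry air
R_AIR = 287.05       # specific gas constant of dry air, J/(kg*K)

def speed_of_sound(temp_kelvin: float) -> float:
    """Local speed of sound in m/s for an ideal diatomic gas."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

def mach_number(speed_ms: float, temp_kelvin: float) -> float:
    return speed_ms / speed_of_sound(temp_kelvin)

# At 15 C (288.15 K) this reproduces the 340.3 m/s figure above:
print(f"a = {speed_of_sound(288.15):.1f} m/s")
print(f"M = {mach_number(459.4, 288.15):.2f}")   # ~Mach 1.35
```

Because a falls with temperature, the same true airspeed corresponds to a higher Mach number at cold high altitudes than at warm sea level.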
Mach number
–
An F/A-18 Hornet creating a vapor cone at transonic speed just before reaching the speed of sound
35.
Viscous
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of "thickness"; for example, honey has a much higher viscosity than water. For a given deformation pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids; otherwise, all fluids are technically said to be viscous or viscid. A fluid with a very high viscosity, such as pitch, may appear to be a solid. The word "viscosity" is derived from the Latin "viscum", also the name of a viscous glue made from mistletoe berries. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be illustrated by a fluid trapped between two parallel plates, the bottom plate fixed and the top plate moving at constant speed; an external force is therefore required in order to keep the top plate moving. For a homogeneous fluid, the required force is F = μ A u / y, where A is the area of each plate, u the speed of the top plate and y the separation between the plates. The proportionality factor μ in this formula is the viscosity of the fluid. The y-axis, perpendicular to the flow, points in the direction of maximum variation of the flow speed. This relation can be generalized to cases where the velocity does not vary linearly with y, such as fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers, as well as physicists.
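The parallel-plate relation above, F = μAu/y, can be evaluated directly. A short sketch (the property values are typical figures assumed here, not taken from the article):

```python
# Newton's law of viscosity for the parallel-plate setup:
# F = mu * A * u / y.

def plate_force(mu: float, area: float, speed: float, gap: float) -> float:
    """Force in N to drag a plate of `area` (m^2) at `speed` (m/s)
    across a fluid layer of thickness `gap` (m) with viscosity mu (Pa s)."""
    return mu * area * speed / gap

# Honey (mu ~ 10 Pa s) versus water (mu ~ 1e-3 Pa s), same geometry:
for name, mu in (("water", 1.0e-3), ("honey", 10.0)):
    f = plate_force(mu, area=0.1, speed=0.5, gap=0.001)
    print(f"{name}: F = {f:g} N")
```

The four orders of magnitude between the two forces is exactly the informal "thickness" difference between water and honey noted above.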
Viscous
–
Pitch has a viscosity approximately 230 billion (2.3 × 10¹¹) times that of water.
Viscous
–
A simulation of substances with different viscosities. The substance above has lower viscosity than the substance below
Viscous
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Viscous
–
Honey being drizzled.
36.
Euler equations (fluid dynamics)
–
In fluid dynamics, the Euler equations are a set of quasilinear hyperbolic equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. Historically, only the incompressible equations were derived by Euler himself. From the mathematical point of view, Euler equations are notably hyperbolic conservation equations in the case without an external field. Like any Cauchy equation, the Euler equations originally formulated in convective form can also be put in the "conservation form". The convective form emphasizes changes to the state in a frame of reference moving with the fluid. They were among the first partial differential equations to be written down. An additional equation, later to be called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. In the equations, g represents body accelerations acting on the fluid, for example gravity, inertial accelerations, electric-field acceleration, and so on. In convective form, the incompressible Euler equations with uniform density read

∂u/∂t + (u · ∇)u = −∇w + g,
∇ · u = 0,

where u is the flow velocity and w = p/ρ₀ is the pressure divided by the constant density. The first equation is the Euler momentum equation with uniform density; the second equation is the incompressible constraint, stating that the flow velocity is a solenoidal field. The equations above thus represent respectively conservation of momentum and conservation of mass. In 3D, for example, the vectors r and u can be written out explicitly in components. Flow velocity and pressure are the so-called physical variables. Although Euler first presented these equations in 1755, fundamental questions about them remain unanswered.
Euler equations (fluid dynamics)
–
The "Streamline curvature theorem" states that the pressure at the upper surface of an airfoil is lower than the pressure far away and that the pressure at the lower surface is higher than the pressure far away; hence the pressure difference between the upper and lower surfaces of an airfoil generates a lift force.
37.
Vorticity
–
In continuum mechanics, vorticity describes the local spinning motion of a fluid near some point: if the particles in a small neighborhood of a point were suddenly frozen into a tiny rigid sphere, the vorticity vector would be twice the mean angular velocity vector of those particles relative to their center of mass, oriented according to the right-hand rule. This quantity must not be confused with the angular velocity of the particles relative to some other point. More precisely, the vorticity is a pseudovector field ω→, defined as the curl of the flow velocity vector u→. The definition can be expressed by the vector formula ω→ ≡ ∇ × u→, where ∇ is the del operator. The vorticity of a two-dimensional flow is perpendicular to the plane of the flow and therefore can be considered a scalar field. The vorticity is related to the flow's circulation along a closed path by Stokes' theorem. This applies, for example, to the formation and motion of vortex rings. In a mass of continuum rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, of water in a tank that has been spinning for a while around its vertical axis at a constant rate. The vorticity may be nonzero even if all particles travel along straight and parallel trajectories, if there is shear: in the laminar flow within a pipe, the vorticity will be maximum near the walls, where the shear is largest. Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the irrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distance from that axis. Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears. If that tiny solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow.
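Two facts from the passage above — that rigid-body rotation has vorticity equal to twice the angular velocity, and that the irrotational vortex has zero vorticity away from its axis — can be checked numerically. A sketch (the grid and field definitions are this edit's illustration):

```python
import numpy as np

# 2-D vorticity as a scalar field: omega = dv/dx - du/dy, computed by
# finite differences for two model flows.

n = 200
x = np.linspace(-2, 2, n)        # grid chosen so r = 0 is not a node
y = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, y, indexing="xy")

def vorticity(u, v, x, y):
    dvdx = np.gradient(v, x, axis=1)
    dudy = np.gradient(u, y, axis=0)
    return dvdx - dudy

# Rigid-body rotation at angular velocity Omega: u = -Omega*y, v = Omega*x.
Omega = 0.7
w_rigid = vorticity(-Omega * Y, Omega * X, x, y)

# Irrotational (free) vortex: u = -y/r^2, v = x/r^2, singular at the axis.
R2 = X**2 + Y**2
mask = R2 > 0.5                  # evaluate only away from the axis
w_free = vorticity(-Y / R2, X / R2, x, y)

print(np.max(np.abs(w_rigid - 2 * Omega)))   # ~0: vorticity equals 2*Omega
print(np.max(np.abs(w_free[mask])))          # ~0 away from the axis
```

The rigid-rotation field is linear, so the finite differences are exact; the free-vortex result is zero only up to truncation error, which shrinks with the grid spacing.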
Vorticity
–
Continuum mechanics
Vorticity
–
Example flows:
38.
Full potential equation
–
In fluid dynamics, potential flow describes the velocity field as the gradient of a scalar function: the velocity potential. As a result, a potential flow is characterized by an irrotational velocity field, which is a valid approximation for several applications. The irrotationality of a potential flow is due to the curl of the gradient of a scalar always being equal to zero. In the case of an incompressible flow the velocity potential satisfies Laplace's equation, and potential theory is applicable. However, potential flows also have been used to describe compressible flows. The potential-flow approach occurs in the modeling of both stationary and nonstationary flows. Applications of potential flow are for instance: the outer flow field for aerofoils, water waves, electroosmotic flow, and groundwater flow. For flows with strong vorticity effects, the potential-flow approximation is not applicable. In fluid dynamics, a potential flow is described by means of a velocity potential φ, being a function of space and time. The flow velocity v is a vector field equal to the gradient, ∇, of the velocity potential φ: v = ∇φ. Sometimes, also the definition v = −∇φ, with a minus sign, is used; but here we will use the definition above, without the minus sign. This implies that a potential flow is an irrotational flow, which has direct consequences for the applicability of potential flow. Fortunately, there are often large regions of a flow where the assumption of irrotationality is valid, which is why potential flow is used for various applications.
Full potential equation
–
Potential-flow streamlines around a NACA 0012 airfoil at 11° angle of attack, with upper and lower streamtubes identified.
39.
Supersonic
–
Supersonic travel is a rate of travel of an object that exceeds the speed of sound (Mach 1). Speeds greater than five times the speed of sound are often referred to as hypersonic. Flights during which only some parts of the air surrounding an object, such as the ends of rotor blades, reach supersonic speeds are called transonic; this typically occurs somewhere between Mach 0.8 and Mach 1.23. Sounds are traveling vibrations in the form of pressure waves in an elastic medium. In gases, sound travels at a speed that depends mostly on the temperature and composition of the gas; pressure has little effect. Since air temperature and composition vary significantly with altitude, Mach numbers for aircraft may change despite a constant travel speed. In water at room temperature, supersonic speed can be considered as any speed greater than 1,440 m/s. In solids, sound waves can have even higher velocities. Supersonic fracture is crack motion faster than the speed of sound in a brittle material. "Supersonic" was once also used for sound frequencies above the range of human hearing; the modern term for this meaning is "ultrasonic". The tip of a bullwhip is thought to be the first man-made object to break the sound barrier, resulting in the telltale "crack": the motion traveling along the tapering whip accelerates toward the tip, which is what makes it capable of achieving supersonic speeds. There have been two supersonic passenger aircraft, namely Concorde and the Tupolev Tu-144; both have since been retired. Some modern fighters are also capable of supercruise, a condition of sustained supersonic flight without the use of an afterburner.
Supersonic
–
A United States Navy F/A-18F Super Hornet in transonic flight
Supersonic
–
U.S. Navy F/A-18 approaching the sound barrier. The white cloud forms as a result of the supersonic expansion fans dropping the air temperature below the dew point.
40.
Hypersonic
–
In aerodynamics, a hypersonic speed is one that is highly supersonic. Since the 1970s, the term has generally been assumed to refer to speeds of Mach 5 and above. The hypersonic regime is often alternatively defined as the range of speeds where ramjets do not produce net thrust. As the Mach number rises, the shock wave moves closer to the body; consequently, the distance between the bow shock and the body decreases at higher Mach numbers. A portion of the kinetic energy associated with flow at high Mach numbers transforms into internal energy in the fluid due to viscous effects, and the increase in internal energy is realized as an increase in temperature. Generally, NASA defines "high" hypersonic as any Mach number from 10 to 25, and re-entry speeds as anything greater than Mach 25. Among the aircraft operating in this regime are various developing spaceplanes. In the following table, "ranges of Mach values" are referenced instead of the usual meanings of "subsonic" and "supersonic". For compressible flow, the Mach and Reynolds numbers alone allow good categorization of many flow cases. Hypersonic flows, however, require other similarity parameters. First, the analytic equations for the oblique shock angle become nearly independent of Mach number at high Mach numbers. Finally, the increased temperature of hypersonic flows means that real gas effects become important. For this reason, research in hypersonics is often referred to as aerothermodynamics, rather than aerodynamics. The introduction of real gas effects means that more variables are required to describe the full state of a gas.
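The conversion of kinetic energy into temperature described above can be made concrete with the ideal-gas stagnation-temperature relation T₀/T = 1 + (γ−1)/2 · M². This is a rough sketch under the calorically perfect gas assumption, which (as the passage notes) breaks down at true hypersonic temperatures where real gas effects appear:

```python
# Stagnation-temperature ratio for a calorically perfect gas:
# T0/T = 1 + (gamma - 1)/2 * M^2.  Illustrates why hypersonics is
# called aerothermodynamics; real-gas effects are ignored here.

gamma = 1.4          # ratio of specific heats for diatomic air

def stagnation_ratio(mach: float) -> float:
    """Ratio of stagnation to static temperature at a given Mach number."""
    return 1.0 + (gamma - 1.0) / 2.0 * mach**2

t_static = 220.0     # K, a typical high-altitude static air temperature
for mach in (1, 5, 10, 25):
    print(f"Mach {mach:>2}: T0 = {t_static * stagnation_ratio(mach):,.0f} K")
```

Even this idealized formula shows stagnation temperatures in the thousands of kelvin at Mach 10 and above, far into the regime where dissociation and other real-gas effects dominate.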
Hypersonic
–
NASA X-43 at Mach 7
41.
Linearization
–
In mathematics, linearization refers to finding the linear approximation to a function at a given point. This method is used in fields such as engineering, physics, economics, and ecology. Linearizations of a function are lines that can be used for purposes of calculation. In short, linearization approximates the output of a function near x = a. For example, √4 = 2. However, what would be a good approximation of √4.001 = √(4 + 0.001)? For any given function y = f(x), f(x) can be approximated if it is near a known differentiable point. The most basic requisite is that L_a(a) = f(a), where L_a(x) is the linearization of f(x) at x = a. The point-slope form of an equation forms an equation of a line, given a point (H, K) and slope M. The general form of this equation is y − K = M(x − H). Using the point (a, f(a)), L_a(x) becomes y = f(a) + M(x − a). Because differentiable functions are locally linear, the best slope to substitute in is the slope of the line tangent to f(x) at x = a. While the concept of local linearity applies most to points arbitrarily close to x = a, points relatively close still work relatively well for linear approximations. The slope M should be, most accurately, the slope of the tangent line at x = a. Visually, the accompanying diagram shows the tangent line of f(x) at x = a.
Linearization
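The √4.001 example above can be computed directly from the tangent-line formula L_a(x) = f(a) + f′(a)(x − a). A short Python sketch (the helper name linearize is illustrative):

```python
import math

def linearize(f, fprime, a):
    """Return the tangent-line approximation L_a(x) = f(a) + f'(a) * (x - a)."""
    def L(x):
        return f(a) + fprime(a) * (x - a)
    return L

# Approximate sqrt(4.001) using the linearization of sqrt at a = 4,
# where f'(x) = 1 / (2 * sqrt(x)):
L = linearize(math.sqrt, lambda x: 1.0 / (2.0 * math.sqrt(x)), 4.0)
approx = L(4.001)   # 2 + 0.001/4 = 2.00025; the true value is 2.0002499843...
```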
42.
Conformal transformation
–
In mathematics, a conformal map is a function that preserves angles locally. In the most common case, the function has a domain and an image in the complex plane. Conformal maps preserve the shapes of infinitesimally small figures, but not necessarily their size or curvature. The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation: if the Jacobian matrix of the transformation is everywhere a scalar times a rotation matrix, then the transformation is conformal. Conformal maps can be defined more generally on a semi-Riemannian manifold. An important family of examples of conformal maps comes from complex analysis: any holomorphic function with non-vanishing derivative is conformal. If f is antiholomorphic, it still preserves angles, but it reverses their orientation. Since a one-to-one holomorphic map defined on a non-empty open set cannot be constant, the open mapping theorem forces the inverse function to be holomorphic. Thus, under this definition, a map is conformal if and only if it is biholomorphic. The two definitions for conformal maps are not equivalent: being one-to-one and holomorphic implies having a non-zero derivative, but the exponential function is a holomorphic function with a non-zero derivative that is not one-to-one, since it is periodic. A map of the extended complex plane onto itself is conformal if and only if it is a Möbius transformation. Again, for the conjugate of such a map, angles are preserved, but orientation is reversed.
Conformal transformation
–
A rectangular grid (top) and its image under a conformal map f (bottom). It is seen that f maps pairs of lines intersecting at 90° to pairs of curves still intersecting at 90°.
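Angle preservation by a holomorphic map can be checked numerically: near a point z₀, f acts like multiplication by f′(z₀), which rotates and scales all tangent directions equally. A small Python sketch using f(z) = z², which is conformal away from the origin (names are illustrative):

```python
import cmath

def angle_between(u, v):
    """Signed angle (radians) from complex direction u to direction v."""
    return cmath.phase(v / u)

# f(z) = z^2 has derivative f'(z) = 2z, nonzero away from the origin,
# so curves crossing at some angle map to curves crossing at the same angle.
z0 = 1 + 1j
fprime = 2 * z0
u, v = 1 + 0j, 1j                       # two directions meeting at 90 degrees
theta_before = angle_between(u, v)       # pi/2
theta_after = angle_between(fprime * u, fprime * v)  # also pi/2
```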
43.
Cylinder (geometry)
–
A cylinder is one of the most basic curvilinear geometric shapes. If the ends are open, it is called an open cylinder; if the ends are closed by flat surfaces, it is called a solid cylinder. The volume of such a cylinder, V = πr²h, has been known since deep antiquity. The surface area of the side, known as the lateral area, is L = 2πrh; an open cylinder therefore has surface area L = 2πrh. The area of a closed cylinder is the sum of all three components: top, bottom and side. Its area is A = 2πr² + 2πrh = 2πr(r + h) = πd(r + h) = L + 2B, where d is the diameter and B is the area of a base. For a given volume, the closed cylinder with the smallest surface area has h = 2r. Equivalently, for a given surface area, the closed cylinder with the largest volume has h = 2r, i.e. the cylinder fits snugly in a cube. Cylindric sections are the intersections of cylinders with planes. For a right circular cylinder, there are four possibilities. A plane tangent to the cylinder meets the cylinder in a single straight line segment. Moved parallel to itself, the plane either does not intersect the cylinder or intersects it in two parallel line segments. All other planes intersect the cylinder in an ellipse or, when they are perpendicular to the axis of the cylinder, in a circle.
Cylinder (geometry)
–
Tycho Brahe Planetarium building, Copenhagen, its roof being an example of a cylindric section
Cylinder (geometry)
–
A right circular cylinder with radius r and height h.
Cylinder (geometry)
–
In projective geometry, a cylinder is simply a cone whose apex is at infinity, which corresponds visually to a cylinder in perspective appearing to be a cone towards the sky.
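The area and volume formulas above, and the h = 2r optimum, are easy to verify numerically. A short Python sketch (function names are illustrative):

```python
import math

def cylinder_volume(r, h):
    """V = pi * r^2 * h."""
    return math.pi * r ** 2 * h

def closed_cylinder_area(r, h):
    """A = 2*pi*r^2 + 2*pi*r*h, i.e. lateral area plus two bases."""
    return 2 * math.pi * r ** 2 + 2 * math.pi * r * h

# For a fixed volume, the closed cylinder with h = 2r has the least area.
best = closed_cylinder_area(1.0, 2.0)            # r = 1, h = 2r
V = cylinder_volume(1.0, 2.0)
taller = closed_cylinder_area(0.9, V / (math.pi * 0.9 ** 2))  # same volume
wider = closed_cylinder_area(1.1, V / (math.pi * 1.1 ** 2))   # same volume
```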
44.
Airfoil
–
An airfoil or aerofoil is the cross-sectional shape of a wing, blade, or sail. An airfoil-shaped body moved through a fluid produces an aerodynamic force. The component of this force perpendicular to the direction of motion is called lift; the component parallel to the direction of motion is called drag. Foils of similar function designed with water as the working fluid are called hydrofoils. The lift on an airfoil is primarily the result of its angle of attack and shape. When oriented at a suitable angle, the airfoil deflects the oncoming air, resulting in a force on the airfoil in the direction opposite to the deflection. This force can be resolved into two components: lift and drag. Cambered airfoils can generate lift at zero angle of attack. A fixed-wing aircraft's wings and horizontal and vertical stabilizers are built with airfoil-shaped cross sections, as are helicopter rotor blades. Airfoils are also found in propellers, fans, and turbines. An airfoil-shaped wing can create downforce on a car or other motor vehicle, improving traction. Airfoils are more efficient lifting shapes than flat plates, able to generate more lift with less drag. A lift and drag curve obtained in wind tunnel testing is shown on the right. The curve represents an airfoil with a positive camber, so some lift is produced at zero angle of attack.
Airfoil
–
Lift and Drag curves for a typical airfoil
Airfoil
–
Examples of airfoils in nature and within various vehicles. Though not strictly an airfoil, the dolphin flipper obeys the same principles in a different fluid medium.
Airfoil
–
An airfoil section is displayed at the tip of this Denney Kitfox aircraft, built in 1991.
Airfoil
–
Airfoil of Kamov Ka-26 helicopters
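The decomposition of the aerodynamic force into lift (perpendicular to the motion) and drag (parallel to it) is a simple vector projection. A minimal two-dimensional Python sketch (the function name is illustrative):

```python
import math

def resolve_lift_drag(fx, fy, flow_dx, flow_dy):
    """Split an aerodynamic force (fx, fy) into drag (the component
    parallel to the oncoming flow) and lift (the component
    perpendicular to it)."""
    mag = math.hypot(flow_dx, flow_dy)
    ux, uy = flow_dx / mag, flow_dy / mag   # unit vector along the flow
    drag = fx * ux + fy * uy                # parallel component
    lift = -fx * uy + fy * ux               # perpendicular component
    return lift, drag

# With flow along +x, a purely vertical force is all lift and no drag:
lift, drag = resolve_lift_drag(0.0, 10.0, 1.0, 0.0)
```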
45.
Lewis Fry Richardson
–
Lewis Fry Richardson was an English mathematician, physicist, and meteorologist who pioneered modern mathematical techniques of weather forecasting. He is also noted for a method of solving a system of linear equations known as modified Richardson iteration. Richardson was the youngest of seven children; his father, David Richardson, operated a successful tanning and leather-manufacturing business. In 1898 he went on to Durham College of Science, where he took courses in chemistry, botany, and zoology. At age 47 he received a doctorate from the University of London. Richardson's working life reflected his eclectic interests: National Physical Laboratory; University College Aberystwyth; chemist, National Peat Industries; National Physical Laboratory; manager of the physical and chemical laboratory, Sunbeam Lamp Company; Manchester College of Technology; Meteorological Office, as superintendent of Eskdalemuir Observatory; Friends Ambulance Unit in France; Meteorological Office at Benson, Oxfordshire; and Head of the Physics Department at Westminster Training College.
Lewis Fry Richardson
–
Lewis Fry Richardson D.Sc., FRS
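Modified Richardson iteration, mentioned above, solves Ax = b by repeatedly stepping along the residual: x ← x + ω(b − Ax). It converges for symmetric positive-definite A when 0 < ω < 2/λ_max(A). A small dependency-free Python sketch (names and the example system are illustrative):

```python
def richardson(A, b, omega, iterations=100):
    """Modified Richardson iteration: x <- x + omega * (b - A @ x).
    Converges for symmetric positive-definite A if 0 < omega < 2/lambda_max."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        residual = [b[i] - sum(A[i][j] * x[j] for j in range(n))
                    for i in range(n)]
        x = [xi + omega * ri for xi, ri in zip(x, residual)]
    return x

# Example system: the eigenvalues of A are 1 and 3, so any 0 < omega < 2/3
# converges; the exact solution is [1.0, 1.0].
A = [[2.0, 1.0], [1.0, 2.0]]
b = [3.0, 3.0]
x = richardson(A, b, omega=0.5)
```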
46.
ENIAC
–
ENIAC was amongst the earliest electronic general-purpose computers made. It was Turing-complete and could solve "a large class of numerical problems" through reprogramming. ENIAC was heralded as a "Giant Brain" by the press. Its construction was financed by the United States Army, Ordnance Corps, Research and Development Command, led by Major General Gladeon M. Barnes. The total cost was about $487,000, which equates to $6,816,000 in 2016. ENIAC was designed by John Mauchly and J. Presper Eckert of the University of Pennsylvania, U.S. In 1946, the researchers formed the Eckert-Mauchly Computer Corporation. ENIAC was a modular computer, composed of individual panels that performed different functions. Twenty of these modules were accumulators, which could add and subtract and hold a ten-digit decimal number in memory. Numbers were passed between these units across general-purpose buses. Key to its versatility was the ability to branch; it could trigger different operations depending on the sign of a computed result. It weighed more than 30 short tons and consumed 150 kW of electricity. This power requirement led to the rumor that whenever the computer was switched on, lights in Philadelphia dimmed. An IBM card punch was used for output; these cards could be used to produce printed output offline using an IBM accounting machine, such as the IBM 405.
ENIAC
–
ENIAC
ENIAC
–
Glen Beck (background) and Betty Snyder (foreground) program ENIAC in BRL building 328. (U.S. Army photo)
ENIAC
–
Cpl. Irwin Goldstein (foreground) sets the switches on one of ENIAC's function tables at the Moore School of Electrical Engineering. (U.S. Army photo) This photo has been artificially darkened, obscuring details such as the women who were present and the IBM equipment in use.
ENIAC
–
A function table from ENIAC on display at Aberdeen Proving Ground museum.
47.
Three-dimensional space
–
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element. This is the informal meaning of the term dimension. In mathematics, a sequence of n numbers can be understood as a location in n-dimensional space. When n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ³. This space serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width, height, depth, and length. In mathematics, analytic geometry describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled x, y, and z. See Euclidean space. Below are images of the above-mentioned systems. Two distinct points always determine a line. Three distinct points are either collinear or determine a unique plane.
Three-dimensional space
–
Three-dimensional Cartesian coordinate system with the x-axis pointing towards the observer.
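The three coordinates (x, y, z) locate a point relative to the axes, and the Euclidean distance between two such points follows from the Pythagorean theorem applied twice. A brief Python sketch (names are illustrative):

```python
import math

def distance(p, q):
    """Euclidean distance between two points in R^3,
    sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

origin = (0.0, 0.0, 0.0)
p = (1.0, 2.0, 2.0)     # three values determine the position of the point
```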
48.
Los Alamos National Lab
–
Los Alamos National Laboratory is one of two laboratories in the United States in which classified work towards the design of nuclear weapons has been undertaken. LANL is a United States Department of Energy national laboratory, managed and operated by Los Alamos National Security, located in Los Alamos, New Mexico. The laboratory is one of the largest science and technology institutions in the world. It conducts multidisciplinary research in fields such as national security, space exploration, and supercomputing. General Leslie Groves wanted a central laboratory at an isolated location for safety and to keep the scientists away from the populace. It should be at least 200 miles from international boundaries and west of the Mississippi. Major John Dudley suggested Oak City, Utah or Jemez Springs, New Mexico, but both were rejected. Dudley had rejected the Los Alamos Ranch School site as not meeting Groves' criteria, but as soon as Groves saw it he said, in effect, "This is the place." Oppenheimer became the laboratory's first director. During the Manhattan Project, Los Alamos hosted thousands of employees, including many Nobel Prize-winning scientists. The location was a total secret; its only address was a post office box, number 1663, in Santa Fe, New Mexico. Eventually other post office boxes were used, including 1539, also in Santa Fe. Though its contract with the University of California was initially intended to be temporary, the relationship was maintained long after the war. The laboratory produced the first atomic bombs; one design was tested at the Trinity site, and the other two were the weapons "Little Boy" and "Fat Man", which were used in the attacks on Hiroshima and Nagasaki.
Los Alamos National Lab
–
Aerial view
Los Alamos National Lab
–
Los Alamos National Laboratory
Los Alamos National Lab
–
The first stages of the explosion of the Trinity nuclear test.
Los Alamos National Lab
–
Sites
49.
Douglas Aircraft
–
The Douglas Aircraft Company was an American aerospace manufacturer based in Southern California. It was founded in 1921 by Donald Wills Douglas, Sr. and later merged with McDonnell Aircraft in 1967 to form McDonnell Douglas. Douglas Aircraft Company largely operated as a division of McDonnell Douglas after the merger. McDonnell Douglas later merged with Boeing in 1997. The Douglas Aircraft Company was founded in Santa Monica, California, following dissolution of the Davis-Douglas Company. An early claim to fame was the first circumnavigation of the world by air in Douglas airplanes in 1924. Donald Douglas proposed a modified Douglas DT to meet the Army's needs. The two-place, open cockpit DT biplane torpedo bomber had previously been produced for the U.S. Navy. The DTs were taken from the assembly lines at the company's manufacturing plants in Rock Island, Illinois and Dayton, Ohio to be modified. The modified aircraft, known as the Douglas World Cruiser, was also the first major project for Jack Northrop, who designed the fuel system for the series. These were sent to airports along the route. The last of these aircraft was delivered to the U.S. Army on 11 March 1924. After the success of the World Cruiser, the Army Air Service ordered six similar aircraft as observation aircraft. Douglas adopted a logo that showed aircraft circling a globe, replacing the original winged heart logo. The logo later evolved into an aircraft, a rocket, and a globe.
Douglas Aircraft
–
Machine tool operator at the Douglas Aircraft plant, Long Beach, California in World War II. After losing thousands of workers to military service, American manufacturers hired women for production positions, to the point where the typical aircraft plant's workforce was 40% female.
Douglas Aircraft
–
Douglas Aircraft Company
Douglas Aircraft
–
Women at work on bomber, Douglas Aircraft Company, Long Beach, California in October 1942
Douglas Aircraft
–
An ex-USAF C-47A Skytrain, the military version of the DC-3, on display in England in 2010. This aircraft flew from a base in Devon, England, during the Invasion of Normandy.
50.
Boeing
–
The Boeing Company is an American multinational corporation that designs, manufactures, and sells airplanes, rotorcraft, rockets, and satellites worldwide. The company also provides leasing and support services. Boeing stock is a component of the Dow Jones Industrial Average. The company is led by President and CEO Dennis Muilenburg. Boeing is organized into five primary divisions: Boeing Commercial Airplanes; Boeing Defense, Space & Security; Engineering, Operations & Technology; Boeing Capital; and Boeing Shared Services Group. In March 1910, William E. Boeing bought Heath's shipyard in Seattle on the Duwamish River, which later became his first airplane factory. Boeing was incorporated in Seattle on July 15, 1916, as "Pacific Aero Products Co". Boeing was later incorporated in Delaware; the original Certificate of Incorporation was filed on July 19, 1934. Boeing, who studied at Yale University, worked initially in the timber industry, where he became wealthy and learned about wooden structures. This knowledge proved invaluable in his subsequent assembly of airplanes. The company stayed in Seattle to take advantage of the local supply of spruce wood. Boeing and Westervelt decided to build the B&W seaplane after having flown in a Curtiss aircraft. Boeing was taught to fly by Glenn Martin himself.
Boeing
–
Replica of Boeing's first plane, the Boeing Model 1, at the Museum of Flight
Boeing
–
William E. Boeing in 1929
Boeing
–
Boeing 377 Stratocruiser
Boeing
–
The Boeing 707 in British Overseas Airways Corporation (BOAC) livery, 1964
51.
Lockheed Corporation
–
The Lockheed Corporation was an American aerospace company. Lockheed was founded in 1912 and later merged with Martin Marietta to form Lockheed Martin in 1995. The Alco Hydro-Aeroplane Company was established in San Francisco in 1912 by the brothers Allan and Malcolm Loughead. Following the Model F-1, the company invested heavily in the development of a revolutionary aircraft called the Model S-1. The Loughead Aircraft Manufacturing Company closed its doors in 1921. In 1926, Kenneth Jay secured funding to form the Lockheed Aircraft Company in Hollywood. This new company utilized some of the same technology originally developed for the Model S-1 to design the Vega Model. In March 1928, the company relocated, and by year's end reported sales exceeding one million dollars. From 1926 to 1928 the company produced over 80 aircraft and employed more than 300 workers, who by April 1929 were building five aircraft per week. In July 1929, majority shareholder Fred Keeler sold 87% of the Lockheed Aircraft Company to Detroit Aircraft Corporation. In August 1929, Allan Lockheed resigned. The Great Depression ruined the aircraft market, and Detroit Aircraft went bankrupt. A group of investors headed by brothers Robert and Courtland Gross, together with Walter Varney, bought the company out of receivership in 1932. The syndicate bought the company for a mere $40,000. Courtlandt S. Gross was also a company executive, succeeding his brother Robert as Chairman following his death in 1961.
Lockheed Corporation
–
P-38J Lightning Yippee
Lockheed Corporation
–
P-38 Lightning assembly line at the Lockheed plant, Burbank, California in World War II. In June 1943, this assembly line was reconfigured into a mechanized line, which more than doubled the rate of production. The transition to the new system was accomplished in only eight days. During this time production never stopped. It was continued outdoors.
Lockheed Corporation
–
A Lockheed L-049 Constellation sporting the livery of Trans World Airlines at the Pima Air & Space Museum.
Lockheed Corporation
–
The Lockheed U-2, which first flew in 1955, provided intelligence on Soviet bloc countries.
52.
Douglas Aircraft Company
–
The Douglas Aircraft Company was an American aerospace manufacturer based in Southern California. It was founded in 1921 by Donald Wills Douglas, Sr. and later merged with McDonnell Aircraft in 1967 to form McDonnell Douglas. Douglas Aircraft Company largely operated as a division of McDonnell Douglas after the merger. McDonnell Douglas later merged with Boeing in 1997. The Douglas Aircraft Company was founded by Donald Wills Douglas, Sr. on July 22, 1921 in Santa Monica, California, following dissolution of the Davis-Douglas Company. An early claim to fame was the first circumnavigation of the world by air in Douglas airplanes in 1924. Donald Douglas proposed a modified Douglas DT to meet the Army's needs. The two-place, open cockpit DT biplane torpedo bomber had previously been produced for the U.S. Navy. The DTs were taken from the assembly lines at the company's manufacturing plants in Rock Island, Illinois and Dayton, Ohio to be modified. The modified aircraft, known as the Douglas World Cruiser, was also the first major project for Jack Northrop, who designed the fuel system for the series. These were sent to airports along the route. The last of these aircraft was delivered to the U.S. Army on 11 March 1924. After the success of the World Cruiser, the Army Air Service ordered six similar aircraft as observation aircraft. Douglas adopted a logo that showed aircraft circling a globe, replacing the original winged heart logo. The logo later evolved into an aircraft, a rocket, and a globe.
Douglas Aircraft Company
–
Machine tool operator at the Douglas Aircraft plant, Long Beach, California in World War II. After losing thousands of workers to military service, American manufacturers hired women for production positions, to the point where the typical aircraft plant's workforce was 40% female.
Douglas Aircraft Company
–
Women at work on bomber, Douglas Aircraft Company, Long Beach, California in October 1942
Douglas Aircraft Company
–
An ex-USAF C-47A Skytrain, the military version of the DC-3, on display in England in 2010. This aircraft flew from a base in Devon, England, during the Invasion of Normandy.
Douglas Aircraft Company
–
Douglas DC-3
53.
McDonnell Aircraft
–
The McDonnell Aircraft Corporation was an American aerospace manufacturer based in St. Louis, Missouri. McDonnell Aircraft later merged with the Douglas Aircraft Company to form McDonnell Douglas in 1967. Jim McDonnell founded J.S. McDonnell & Associates in Milwaukee, Wisconsin in 1928 to produce a personal aircraft for family use. The economic depression beginning in 1929 ruined his plans, and the company collapsed. He went to work for Glenn L. Martin, then left in 1938 to try again, founding McDonnell Aircraft Corporation, based near St. Louis, Missouri, in 1939. World War II was a major boost to the new company. McDonnell also developed the LBD-1 Gargoyle guided missile. After the war, McDonnell Aircraft heavily cut its workforce. The advent of the Korean War helped push McDonnell into a major military fighter role. Dave Lewis joined the company in 1946. He led the development of the legendary F-4 Phantom II in 1954, introduced into service in 1960. Lewis became President and Chief Operating Officer in 1962, and went on to manage the Douglas Aircraft Division in 1967 after the McDonnell Douglas merger.
McDonnell Aircraft
–
An FH-1 Phantom, in 1948.
McDonnell Aircraft
–
McDonnell F2H Banshee, F3H Demon, and F4H Phantom II.
54.
NASA
–
President Dwight D. Eisenhower established NASA in 1958 with a distinctly civilian orientation, encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA's predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on October 1, 1958. The agency is also responsible for the Launch Services Program, which provides countdown management for unmanned NASA launches. NASA shares data with various national and international organizations, such as data from the Greenhouse Gases Observing Satellite. From 1946, the National Advisory Committee for Aeronautics had been experimenting with rocket planes such as the supersonic Bell X-1. In the early 1950s, there was a challenge to launch an artificial satellite for the International Geophysical Year. An effort for this was the American Project Vanguard. This led to an agreement that a new federal agency mainly based on NACA was needed to conduct all non-military activity in space. The Advanced Research Projects Agency was created in February 1958 to develop space technology for military application. On July 29, 1958, Eisenhower signed the National Aeronautics and Space Act, establishing NASA. A NASA seal was approved by President Eisenhower in 1959. Elements of the United States Naval Research Laboratory were incorporated into NASA. Many of ARPA's early space programs were also transferred to NASA. In December 1958, NASA gained control of the Jet Propulsion Laboratory, a facility operated by the California Institute of Technology.
NASA
–
1963 photo showing Dr. William H. Pickering, JPL Director (center), and President John F. Kennedy (right), with NASA Administrator James Webb in the background. They are discussing the Mariner program, with a model presented.
NASA
–
Seal of NASA
NASA
–
At launch control for the May 28, 1964, Saturn I SA-6 launch. Wernher von Braun is at center.
NASA
–
Mercury-Atlas 6 launch on February 20, 1962
55.
Submarine
–
A submarine is a watercraft capable of independent operation underwater. It differs from a submersible, which has more limited underwater capability. The term most commonly refers to a crewed vessel. Used as an adjective in phrases such as submarine cable, submarine means "under the sea". The noun submarine evolved as a shortened form of submarine boat. For reasons of naval tradition, submarines are usually referred to as "boats" rather than as "ships", regardless of their size. Submarines were adopted by several navies and were first widely used during World War I; they now figure in many navies, large and small. Civilian uses for submarines include marine science, salvage, exploration, and facility inspection and maintenance. Submarines can also be modified to perform more specialized functions such as undersea cable repair, and they are used for undersea archaeology as well. In modern submarines, the tower-like structure is called the "sail" in American usage and the "fin" in European usage. Various hydrodynamic control fins allow a submarine to be maneuvered when submerged. Smaller, specialty submarines may deviate significantly from this traditional layout. Submarines change the amount of air in their ballast tanks to decrease buoyancy for submerging or increase it for surfacing.
Submarine
–
A Russian Navy Typhoon-class submarine underway. Also known as "Project 941".
Submarine
–
Drebbel, the first navigable submarine
Submarine
–
The French submarine Plongeur
Submarine
–
The Nordenfelt-designed Ottoman submarine Abdül Hamid
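The ballast-tank mechanism described above is an application of Archimedes' principle: the net vertical force is the buoyant force ρ_water · g · V_displaced minus the boat's weight, and flooding or blowing the tanks changes the mass while the displaced volume stays essentially fixed. A rough Python sketch (the constants and function names are illustrative):

```python
RHO_SEAWATER = 1025.0   # kg/m^3, a typical value for seawater
G = 9.81                # m/s^2, gravitational acceleration

def net_buoyancy(displaced_volume_m3, mass_kg):
    """Net upward force (N): Archimedes' buoyant force minus weight.
    Positive means the vessel tends to rise; negative, to sink."""
    return RHO_SEAWATER * G * displaced_volume_m3 - mass_kg * G

# Flooding the ballast tanks raises the mass at fixed displaced volume,
# driving net buoyancy negative (submerge); blowing the tanks with
# compressed air reduces the mass again (surface).
```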
56.
Ship
–
A ship is a large buoyant watercraft. Ships are generally distinguished from boats based on size and passenger capacity. Historically, a "ship" was a vessel with at least three square-rigged masts and a full bowsprit. In armed conflict and in daily life, ships have become an integral part of modern commercial and military systems. Fishing boats are used throughout the world. Military forces operate vessels for naval warfare and to transport and support forces ashore. Commercial vessels, nearly 35,000 in number, carried 7.4 billion tons of cargo in 2007. As of 2011, there are about 104,304 ships in the world. Ships have always been a key element in scientific and technological development. Navigators such as Zheng He spread such inventions as the compass and gunpowder. Ships have served scientific, cultural, and humanitarian needs. After the 16th century, new crops that had come from the Americas via the European seafarers significantly contributed to world population growth. Maritime transport has shaped the world's economy into today's energy-intensive pattern. There is no universal definition of what distinguishes a ship from a boat, but ships can usually be distinguished from boats based on the ship's ability to operate independently for extended periods.
Ship
–
Italian full-rigged ship Amerigo Vespucci in New York Harbor, 1976
Ship
–
A raft is among the simplest boat designs.
Ship
–
Roman trireme mosaic from Carthage, Bardo Museum, Tunis.
Ship
–
A Japanese atakebune from the 16th century
57.
Automobile
–
A car is a wheeled, self-powered motor vehicle used for transportation and a product of the automotive industry. The year 1886 is regarded as the birth year of the modern car; in that year, German inventor Karl Benz built the Benz Patent-Motorwagen. Cars did not become widely available until the early 20th century. One of the first cars accessible to the masses was the 1908 Model T, an American car manufactured by the Ford Motor Company. Cars are equipped with controls used for driving, parking, passenger comfort and safety, and controlling a variety of lights. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. Examples include rear-reversing cameras, navigation systems, and in-car entertainment. Most cars in use in the 2010s are propelled by an internal combustion engine, fueled by the deflagration of gasoline or diesel. Both fuels cause air pollution and are also blamed for contributing to global warming. Vehicles using alternative fuels, such as ethanol flexible-fuel vehicles and natural gas vehicles, are also gaining popularity in some countries. Electric cars, which were invented early in the history of the car, began to become commercially available in 2008. There are costs and benefits to car use. Road traffic accidents are the largest cause of injury-related deaths worldwide, while the benefits include on-demand transportation, mobility, and convenience.
Automobile
–
Benz "Velo" model (1894) by German inventor Carl Benz – entered into an early automobile race as a motocycle
Automobile
–
A modern car, BMW E90
Automobile
–
Cugnot's 1771 fardier à vapeur, as preserved at the Musée des Arts et Métiers, Paris
Automobile
–
Karl Benz, the inventor of the modern car
58.
Helicopter
–
A helicopter is a type of rotorcraft in which lift and thrust are supplied by rotors. This allows the helicopter to take off and land vertically, to hover, and to fly forward, backward, and laterally. These attributes allow helicopters to be used in congested or isolated areas where fixed-wing aircraft and many forms of VTOL aircraft cannot perform. English-language nicknames for helicopter include "chopper", "copter", "helo", and "whirlybird". Helicopters were built during the first half-century of flight, with the Focke-Wulf Fw 61 being the first operational helicopter in 1936. It was not until 1942 that a helicopter designed by Igor Sikorsky reached full-scale production, with 131 aircraft built. Tandem rotor helicopters are also in widespread use due to their greater payload capacity. Coaxial helicopters, tiltrotors, and compound helicopters are all flying today. Other types of multicopter have been developed for specialized applications such as unmanned drones. The earliest references for vertical flight came from China. Since around 400 BC, Chinese children have played with bamboo flying toys. This bamboo-copter is spun by rolling a stick attached to a rotor; the toy flies when released. The 4th-century AD Daoist book Baopuzi by Ge Hong reportedly describes some of the ideas inherent to rotary-wing aircraft. Designs similar to the Chinese toy appeared in Renaissance paintings and other works.
Helicopter
–
A police department Bell 206 helicopter
Helicopter
–
A decorated Japanese taketombo bamboo-copter
Helicopter
–
Leonardo's "aerial screw"
Helicopter
–
Prototype created by M. Lomonosov, 1754
59.
Aircraft
–
An aircraft is a machine that is able to fly by gaining support from the air. The human activity that surrounds aircraft is called aviation. Aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, propulsion, usage, and others. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras: pioneers of flight, from the earliest experiments to 1914; the First World War, 1914 to 1918; aviation between the World Wars, 1918 to 1939; the Second World War, 1939 to 1945; and the postwar era, also called the jet age, 1945 to the present day. Aerostats use buoyancy to float in the air in much the same way that ships float on the water. A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs, usually fixed-wing. In 1919 Frederick Handley Page was reported as referring to "ships of the air," with smaller passenger types as "Air yachts," though none had yet been built. In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships".
Aircraft
–
NASA test aircraft
Aircraft
–
The Mil Mi-8 is the most-produced helicopter in history
Aircraft
–
"Voodoo" a modified P 51 Mustang is the 2014 Reno Air Race Champion
Aircraft
–
A hot air balloon in flight
60.
Wind turbines
–
A wind turbine is a device that converts the wind's kinetic energy into electrical power. Wind turbines are manufactured in a wide range of vertical- and horizontal-axis types. The smallest turbines are used for applications such as battery charging for auxiliary power for boats or caravans, or to power traffic warning signs. Windmills were used in Persia about 500-900 A.D. The windwheel of Hero of Alexandria marks one of the first known instances of wind powering a machine in history. However, the first known practical windmills were built in Sistan, an eastern province of Iran, from the 7th century. These "panemone" were vertical-axle windmills, which had long vertical drive shafts with rectangular blades. Windmills first appeared in Europe during the Middle Ages. By the 14th century, Dutch windmills were in use to drain areas of the Rhine delta. Advanced windmills were described by Croatian inventor Fausto Veranzio; in his book Machinae Novae he described vertical-axis wind turbines with curved or V-shaped blades. In 1887, Scottish academic James Blyth built a wind turbine to generate electricity, and some months later American inventor Charles F. Brush built the first automatically operated wind turbine in Cleveland, Ohio. Although Blyth's turbine was considered uneconomical in the United Kingdom, electricity generation by wind turbines was more cost-effective in countries with widely scattered populations. The largest machines were on 24-meter towers with four-bladed 23-meter diameter rotors. By 1908 there were 72 wind-driven electric generators operating in the United States, from 5 kW to 25 kW.
Wind turbines
–
Offshore wind farm, using 5 MW turbines REpower 5M in the North Sea off the coast of Belgium.
Wind turbines
–
James Blyth's electricity-generating wind turbine, photographed in 1891
Wind turbines
–
The first automatically operated wind turbine, built in Cleveland in 1887 by Charles F. Brush. It was 60 feet (18 m) tall, weighed 4 tons (3.6 metric tonnes) and powered a 12 kW generator.
Wind turbines
–
Nordex N117/2400 in Germany, a modern low-wind turbine.
61.
Yacht
–
A yacht /ˈjɒt/ is a recreational boat or ship. In modern use of the term, yachts differ from working ships mainly by their purpose. There are two different classes of yachts: sailing yachts and power boats. With the rise of the steamboat and other types of powerboat, sailing vessels in general came to be perceived as luxury or recreational vessels. Later the term came to encompass large motor boats used primarily for private pleasure purposes as well. Yacht lengths normally range from several metres up to dozens of metres. A craft smaller than 12 metres is more commonly called a cabin cruiser or simply a cruiser. A megayacht generally refers to any yacht over 50 metres. This size is small in relation to typical cruise liners and oil tankers. A few countries have a special flag worn by recreational ships, which indicates the nationality of the ship. Although inspired by the national flag, the ensign does not always correspond with the civil or merchant ensign of the state in question. Yacht ensigns differ from merchant ensigns in order to signal that the yacht is not carrying cargo that requires a customs declaration. Carrying commercial cargo on a boat with a yacht ensign is deemed to be smuggling in many jurisdictions. A much wider range of hull materials is used today. Although wood hulls are still in production, the most common material is fibreglass, followed by aluminium, steel, carbon fibre and ferrocement.
Yacht
–
Sailing Yacht "Zapata II"
Yacht
–
The "Lazzara" 80' "Alchemist" runs at full speed up the California Coast
Yacht
–
A yacht in Lorient, Brittany, France
Yacht
–
Aerial view of a yacht club and marina - Yacht Harbour Residence "Hohe Düne" in Rostock, Germany.
62.
Boundary layer
–
On an aircraft wing the boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. Boundary layers can be loosely classified according to the circumstances under which they are created. When a fluid rotates and viscous forces are balanced by the Coriolis effect, an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow is responsible for skin friction. The layer of air over the wing's surface that is slowed by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or "eddies." Laminar flow creates less skin friction drag than turbulent flow, but is less stable. Boundary layer flow over a wing surface begins as a smooth laminar flow. As the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow. The low-energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The boundary-layer approximation allows a closed-form solution for the flow in both regions, a significant simplification of the full Navier–Stokes equations.
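The growth of the laminar layer described above has a classical closed-form estimate, the Blasius flat-plate result δ ≈ 5.0x/√Re_x. A sketch with illustrative (assumed) freestream values, not figures from the article:

```python
import math

def blasius_thickness(x, U, nu):
    """Laminar (Blasius) boundary-layer thickness on a flat plate.

    x: distance from leading edge (m), U: freestream speed (m/s),
    nu: kinematic viscosity (m^2/s). Valid only while the layer stays laminar.
    """
    re_x = U * x / nu                 # local Reynolds number
    return 5.0 * x / math.sqrt(re_x)  # delta ~ 5 x / sqrt(Re_x)

# Air (nu ~ 1.5e-5 m^2/s) at 10 m/s, 0.1 m behind the leading edge:
# the layer is only a couple of millimetres thick.
delta = blasius_thickness(0.1, 10.0, 1.5e-5)
```

Since δ grows like √x, moving four times farther from the leading edge doubles the thickness, matching the statement that the laminar layer thickens with distance.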
Boundary layer
–
Ludwig Prandtl
63.
University of Stuttgart
–
The University of Stuttgart is a university located in Stuttgart, Germany. It is organized into 10 faculties. It is one of the top nine leading technical universities in Germany, with highly ranked programs in civil, mechanical and electrical engineering. The academic tradition of the University of Stuttgart goes back to its probably most famous student: Gottlieb Daimler, the inventor of the automobile. These universities, in combination with RWTH Aachen, are the top five universities of the aforementioned TU9. From 1770 to 1794, the Karlsschule was the first university in Stuttgart. Located in Stuttgart-Hohenheim, it is not related to the University of Stuttgart, except for some joint activities. What is now the University of Stuttgart celebrated its 175th anniversary in 2004. Because of the increasing importance of the technical sciences and instruction in these fields, from 1876 the university was known as the Technical College. In 1900 it was awarded the right to grant doctoral degrees in the technical disciplines. The development of the courses of study at the Technical College of Stuttgart led to the present-day "Universität Stuttgart". Since the end of the 1950s, part of the university has been located in the suburb of Stuttgart-Vaihingen. Most technical subjects are located in Vaihingen, while the humanities, the social sciences and similar subjects are still located on the city-center campus. As of 2014, the University of Stuttgart is ranked 85th in the world according to the QS World University Rankings.
University of Stuttgart
–
Mensa building at the main campus
University of Stuttgart
–
Campus at Vaihingen
University of Stuttgart
–
International Centrum at the University of Stuttgart
University of Stuttgart
–
Keplerstraße 11 ("K1", right) and 17 ("K2", left) in the city center
64.
MIT
–
The Massachusetts Institute of Technology is a private research university in Cambridge, Massachusetts. Its researchers worked on computers, radar and inertial guidance during World War II and the Cold War. Post-war research contributed to the rapid expansion of the faculty and campus under James Killian. The 168-acre campus opened in 1916 and extends over 1 mile along the northern bank of the Charles River basin. It is often cited as among the world's top universities. The aggregated revenues of companies founded by MIT alumni would rank as the eleventh-largest economy in the world. William Barton Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. The Rogers Plan reflected the German research model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories. Two days after the charter was issued, the first battle of the Civil War broke out. After a long delay through the war years, MIT's first classes were held in 1865. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed into the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay. MIT was informally called "Boston Tech". The institute emphasized laboratory instruction from an early date. Despite financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker.
MIT
–
Stereographic card showing an MIT mechanical drafting studio, 19th century (photo by E.L. Allen, left/right inverted)
MIT
–
Massachusetts Institute of Technology
MIT
–
A 1905 map of MIT's Boston campus
MIT
–
Plaque in Building 6 honoring George Eastman, founder of Eastman Kodak, who was revealed as the anonymous "Mr. Smith" who helped maintain MIT's independence
65.
Grumman Aircraft
–
The Grumman Aircraft Engineering Corporation, later Grumman Aerospace Corporation, was a leading 20th century U.S. producer of military and civilian aircraft. All of the early Grumman employees were former Loening employees. The company was named for Leroy Grumman because he was its largest investor. The company filed as a business on December 5, 1929, and opened its doors on January 2, 1930. Keeping busy by welding aluminum tubing for truck frames, the company eagerly pursued contracts with the US Navy. Grumman designed the first practical floats with retractable landing gear for the Navy; this launched Grumman into the aviation market. The first Grumman aircraft was also for the Navy: the FF-1, a biplane with retractable landing gear. This was followed by a number of other successful designs. Grumman ranked 22nd among United States corporations in the value of wartime production contracts. Grumman's first jet aircraft was the F9F Panther; it was followed by the upgraded F9F/F-9 Cougar and the less well known F-11 Tiger in the 1950s. Grumman products were prominent in the film Top Gun and numerous World War II naval and Marine Corps aviation films. Grumman was the chief contractor on the Apollo Lunar Module that landed men on the Moon. The firm received the contract on November 7, 1962, and built 13 lunar modules. The company ended up involved in the Space Shuttle program nonetheless, as a subcontractor to Rockwell, providing the wings and vertical stabilizer sections. In 1969 the company changed its name to Grumman Aerospace Corporation, and in 1978 it sold the Grumman-American Division to Gulfstream Aerospace.
Grumman Aircraft
–
Grumman Historical Marker
Grumman Aircraft
–
Grumman Corporation
Grumman Aircraft
–
Apollo Spacecraft: Apollo Lunar Module Diagram
Grumman Aircraft
–
F-14 Tomcat at Grumman Memorial Park, Calverton, New York
66.
New York University
–
New York University is a private nonprofit nonsectarian research university based in New York City. Founded in 1831, NYU is considered one of the world's foremost research universities. NYU is organized into more than 20 schools and institutes, located in six centers throughout Manhattan and Downtown Brooklyn. NYU enrolls 24,000 post-graduate students from a wide variety of religious, ethnic and geographic backgrounds, including 183 foreign countries. NYU was elected to the Association of American Universities in 1950. Its alumni include 17 living billionaires. NYU's teams are called the Violets, the colors being the trademarked hue "NYU Violet" and white; the school mascot is the bobcat. Almost all sporting teams participate in the University Athletic Association. Albert Gallatin was elected as the institution's first president. The university was officially renamed New York University in 1896. In 1832, NYU held its first classes in rented rooms of four-story Clinton Hall, situated near City Hall. In 1835, NYU's first professional school, the School of Law, was established. The American Chemical Society was founded at NYU in 1876. By 1917 it had become one of the nation's largest universities. NYU has had its Washington Square campus since its founding.
New York University
–
Albert Gallatin
New York University
–
New York University
New York University
–
The University Heights campus, now home to Bronx Community College
New York University
–
The Silver Center c. 1900
67.
Grumman Aerospace
–
The Grumman Aircraft Engineering Corporation, later Grumman Aerospace Corporation, was a leading 20th century U.S. producer of military and civilian aircraft. All of the early Grumman employees were former Loening employees. The company was named for Leroy Grumman because he was its largest investor. The company opened its doors on January 2, 1930. Keeping busy by welding aluminum tubing for truck frames, the company eagerly pursued contracts with the US Navy. This launched Grumman into the aviation market. The first Grumman aircraft was also for the Navy: the Grumman FF-1, a biplane with retractable landing gear. This was followed by a number of successful designs. Grumman ranked 22nd among United States corporations in the value of wartime production contracts. Grumman's first jet aircraft was the F9F Panther; it was followed by the upgraded F9F/F-9 Cougar and the less well known F-11 Tiger in the 1950s. Grumman products were prominent in numerous World War II naval and Marine Corps aviation films. Grumman was the chief contractor on the Apollo Lunar Module that landed men on the Moon. The firm built 13 lunar modules. The company ended up involved in the Space Shuttle program nonetheless, as a subcontractor to Rockwell, providing the wings and vertical stabilizer sections. In 1978 it sold the Grumman-American Division to Gulfstream Aerospace.
Grumman Aerospace
–
Grumman Historical Marker
Grumman Aerospace
–
Grumman Corporation
Grumman Aerospace
–
Apollo Spacecraft: Apollo Lunar Module Diagram
Grumman Aerospace
–
F-14 Tomcat at Grumman Memorial Park, Calverton, New York
68.
Antony Jameson
–
Antony Jameson FREng is Professor of Engineering in the Department of Aeronautics & Astronautics at Stanford University. Jameson is known for his pioneering work in the field of computational fluid dynamics. He has published more than 300 scientific papers in a wide range of areas including computational fluid dynamics and control theory. Born in Gillingham, Kent, UK, Jameson spent much of his early childhood in India, where his father was stationed as a British Army officer. He first attended school at Shimla. Subsequently he was educated at Mowden Hall School and Winchester College. Jameson was sent to Malaya. Jameson graduated in 1958. He was a Research Fellow of Trinity Hall from 1960 to 1963. On leaving Cambridge he worked from 1964 to 1965 before becoming Chief Mathematician in Coventry. In 1966, Jameson joined the Aerodynamics Section of Grumman Aircraft Engineering Corporation in New York. In this period, his work was largely directed to stability augmentation systems. Starting in 1970, he began to concentrate on the problem of predicting transonic flow. It was clear that new methods would have to be developed.
Antony Jameson
–
Antony Jameson in 2008
69.
Cartesian coordinate system
–
In general, n Cartesian coordinates specify a point in an n-dimensional Euclidean space for any dimension n. These coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments and had a variable length measured in reference to this axis. Later commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as polar coordinates for the plane and spherical and cylindrical coordinates for three-dimensional space. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces. A line with a chosen Cartesian system is called a number line.
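The relationship to polar coordinates mentioned above is a two-line computation in code. A sketch, using the point (2, 3) that appears in the illustration caption for this entry:

```python
import math

def to_polar(x, y):
    """Cartesian (x, y) -> polar (r, theta), with theta in radians."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """Polar (r, theta) -> Cartesian (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

# Round trip for the point (2, 3): converting to polar and back
# recovers the original Cartesian coordinates.
r, theta = to_polar(2.0, 3.0)
x, y = to_cartesian(r, theta)
```

Using atan2 rather than atan(y/x) keeps the angle in the correct quadrant for all four sign combinations of x and y.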
Cartesian coordinate system
–
The right hand rule.
Cartesian coordinate system
–
Illustration of a Cartesian coordinate plane. Four points are marked and labeled with their coordinates: (2,3) in green, (−3,1) in red, (−1.5,−2.5) in blue, and the origin (0,0) in purple.
Cartesian coordinate system
–
3D Cartesian Coordinate Handedness
70.
Georgia Institute of Technology
–
The Georgia Institute of Technology is a public research university in Atlanta, Georgia, in the United States. It is a part of the University System of Georgia and has satellite campuses in Savannah, Georgia; Metz, France; Athlone, Ireland; Shenzhen, China; and Singapore. Initially, it offered only a degree in mechanical engineering. By 1901, its curriculum had expanded to include electrical, civil and chemical engineering. In 1948, the school changed its name to reflect its evolution to a larger and more capable technical institute and university. Georgia Tech contains about 31 departments and units, with an emphasis on science and technology. It is well recognized in engineering, computing, business administration, the sciences and the liberal arts. Student athletics, both organized and intramural, are a part of student and alumni life. Georgia Tech fields several men's and women's teams; its football team competes in the Football Bowl Subdivision. Georgia Tech is a member of the Coastal Division in the Atlantic Coast Conference. The idea of a technology school in Georgia was introduced during the Reconstruction period, when rapid technical developments made such a school necessary. In 1882, the Georgia State Legislature authorized a committee, led by Harris, to visit the Northeast to see firsthand how technology schools worked. They were impressed by the educational models developed at the Worcester County Free Institute of Industrial Science. In October 1885, Georgia Governor Henry D. McDaniel signed the bill to fund the new school.
Georgia Institute of Technology
–
Atlanta during the Civil War (c. 1864)
Georgia Institute of Technology
–
Georgia Institute of Technology
Georgia Institute of Technology
–
An early picture of Georgia Tech
Georgia Institute of Technology
–
Former Georgia Tech President G. Wayne Clough speaks at a student meeting.
71.
Overflow (software)
–
OVERFLOW (the OVERset grid FLOW solver) is a software package for simulating fluid flow around solid bodies using computational fluid dynamics. It is a 3-D flow solver that solves the time-dependent, Reynolds-averaged Navier–Stokes equations using multiple overset structured grids. OVERFLOW was developed at NASA's Johnson Space Center in Houston, Texas, and at NASA Ames Research Center in Moffett Field, California. The driving force behind this work was the need to evaluate the flow about the Space Shuttle launch vehicle. Scientists use OVERFLOW to better understand the aerodynamic forces on the vehicle by evaluating the flowfield surrounding it. OVERFLOW has also been used to simulate the effect of debris on the Space Shuttle vehicle. See also: computational fluid dynamics. External links: the official NASA OVERFLOW CFD code web site and an article on OVERFLOW from NASA Insights.
Overflow (software)
–
This image depicts the flowfield around the Space Shuttle Launch Vehicle traveling at Mach 2.46 and at an altitude of 66,000 feet (20,000 m). The surface of the vehicle is colored by the pressure coefficient, and the gray contours represent the density of the surrounding air, as calculated using the OVERFLOW codes.
72.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures for dealing with lengths, areas and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry also arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists expanded on these ideas during the Middle Ages. By the 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes and Pierre de Fermat. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds, describing spaces that lie beyond the normal range of human experience. While geometry has evolved significantly throughout the years, there are some general concepts that are fundamental to geometry. These include the concepts of points, lines, planes, surfaces and curves, as well as the more advanced notions of manifolds and topology or metric. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, circles and analytic geometry. Euclidean geometry also has applications in computer science and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry.
Geometry
–
Visual checking of the Pythagorean theorem for the (3, 4, 5) triangle as in the Chou Pei Suan Ching 500–200 BC.
Geometry
–
An illustration of Desargues' theorem, an important result in Euclidean and projective geometry
Geometry
–
Geometry lessons in the 20th century
Geometry
–
A European and an Arab practicing geometry in the 15th century.
73.
CAD
–
Computer-aided design (CAD) is the use of computer systems to aid in the creation, modification, analysis, or optimization of a design. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The term CADD (computer-aided design and drafting) is also used. Its use in designing electronic systems is known as electronic design automation (EDA). However, CAD involves more than just shapes. CAD may be used to design curves and figures in two-dimensional space, or curves, surfaces and solids in three-dimensional space. CAD is an industrial art extensively used in many applications, including the automotive, shipbuilding and aerospace industries, industrial and architectural design, prosthetics, and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising and technical manuals, often called DCC (digital content creation). The modern power of computers means that even perfume bottles and shampoo dispensers are designed using techniques unheard of by engineers of the 1960s. Because of its economic importance, CAD has been a major driving force for research in computational geometry, computer graphics and discrete differential geometry. The design of geometric models for object shapes, in particular, is occasionally called geometric design. Eventually CAD provided the designer with the ability to perform engineering calculations. During this transition, calculations were still performed either by hand or by those individuals who could run computer programs. CAD was a revolutionary change in the industry, where draftsman, designer and engineering roles began to merge. It did not eliminate departments so much as it merged departments and empowered draftsmen, designers and engineers.
74.
Volume
–
Volume is the quantity of three-dimensional space enclosed by a closed surface; for example, the space that a substance or shape occupies or contains. Volume is often quantified numerically using the cubic metre. Three-dimensional mathematical shapes are also assigned volumes. Volumes of simple shapes, such as regular, straight-edged and circular shapes, can be easily calculated using arithmetic formulas. The volume of a complicated shape can be calculated by integral calculus if a formula exists for the shape's boundary. Two-dimensional shapes are assigned zero volume in three-dimensional space. The volume of a solid can be determined by fluid displacement. Displacement of liquid can also be used to determine the volume of a gas. The combined volume of two substances is usually greater than the volume of one of the substances. However, sometimes one substance dissolves in the other, and in such cases the combined volume is not additive. In differential geometry, volume is expressed by means of the volume form, and is an important global Riemannian invariant. In thermodynamics, volume is a conjugate variable to pressure. Any unit of length gives a corresponding unit of volume: the volume of a cube whose sides have the given length. For example, a cubic centimetre is the volume of a cube whose sides are one centimetre in length. In the International System of Units, the standard unit of volume is the cubic metre.
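The integral-calculus route mentioned above can be sketched with the disc method: rotating y = f(x) about the x-axis sweeps out a volume of ∫ πf(x)² dx, which a simple midpoint-rule quadrature approximates well. The example recovers the closed form 4/3·πr³ for a unit sphere:

```python
import math

def volume_of_revolution(f, a, b, n=100000):
    """Volume swept by rotating y = f(x), a <= x <= b, about the x-axis,
    using the disc method V = integral of pi*f(x)^2 dx (midpoint rule)."""
    h = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * h) ** 2 * h for i in range(n))

# A sphere of radius 1 is generated by f(x) = sqrt(1 - x^2) on [-1, 1];
# the numerical result should approach the closed form 4/3 * pi.
v = volume_of_revolution(lambda x: math.sqrt(1.0 - x * x), -1.0, 1.0)
```

Note that the integrand π(1 − x²) is a polynomial, so the midpoint rule converges quickly here; shapes with less convenient boundaries are exactly where numerical integration earns its keep.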
Volume
–
A measuring cup can be used to measure volumes of liquids. This cup measures volume in units of cups, fluid ounces, and millilitres.
75.
Enthalpy
–
Enthalpy /ˈɛnθəlpi/ is a measurement of energy in a thermodynamic system. Enthalpy is the thermodynamic equivalent of the total heat content of a system. Enthalpy is equal to the internal energy of the system plus the product of pressure and volume. It is defined as a state function that depends only on the prevailing equilibrium state, identified by the variables internal energy, pressure and volume. Enthalpy is an extensive quantity. At constant pressure, the enthalpy change equals the energy transferred through heating or work other than expansion work. The enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a difference in energy carries physical meaning. The change ΔH is negative in heat-releasing (exothermic) processes and positive in heat-absorbing (endothermic) processes. This means that the change in enthalpy under such conditions is the heat absorbed by the material through external heat transfer. Enthalpies for chemical substances at constant pressure assume a standard state: most commonly 1 bar pressure. The standard state does not, strictly speaking, specify a temperature, but expressions for enthalpy generally reference the standard heat of formation at 25 °C. The enthalpy of ideal gases and of incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses. The word enthalpy stems from the Greek verb enthalpein, which means "to warm in".
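The defining relation above, H = U + pV, is simple enough to check numerically. A sketch with an assumed worked example (one mole of a monatomic ideal gas, not a case taken from the article), where U = (3/2)nRT and pV = nRT, so H should come out to (5/2)nRT:

```python
def enthalpy(u, p, v):
    """Enthalpy H = U + p*V (consistent SI units: J, Pa, m^3)."""
    return u + p * v

# Assumed example: 1 mol of a monatomic ideal gas at 1 atm and 298.15 K.
R, T, p = 8.314, 298.15, 101325.0
u = 1.5 * R * T        # internal energy U = (3/2) n R T for n = 1 mol
v = R * T / p          # volume V = n R T / p from the ideal-gas law
h = enthalpy(u, p, v)  # should equal (5/2) R T, about 6.2 kJ/mol
```

The example also illustrates the pressure-independence noted above: for an ideal gas the pV term collapses to nRT, so H depends on temperature alone.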
Enthalpy
–
Fig.1 During steady, continuous operation, an energy balance applied to an open system equates shaft work performed by the system to heat added plus net enthalpy added
Enthalpy
–
Fig.3 Two open systems in the steady state. Fluid enters the system (dotted rectangle) at point 1 and leaves it at point 2. The mass flow is ṁ. a: schematic diagram of the throttling process. b: schematic diagram of a compressor. A power P is applied and a heat flow is released to the surroundings at ambient temperature T_a.
76.
Random-access memory
–
Random-access memory (RAM) is a form of computer data storage which stores frequently used program instructions to increase the general speed of a system. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. In today's technology, random-access memory takes the form of integrated circuits. Non-volatile random access for reads is also provided by a type of flash memory called NOR flash. Integrated-circuit RAM chips came into the market with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers. The first practical form of random-access memory was the Williams tube, starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM was a testbed to demonstrate the reliability of the memory. Magnetic-core memory was developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored, with one bit stored per ring.
Random-access memory
–
Example of writable volatile random-access memory: Synchronous Dynamic RAM modules, primarily used as main memory in personal computers, workstations, and servers.
Random-access memory
–
These IBM tabulating machines from the 1930s used mechanical counters to store information
Random-access memory
–
A portion of a core memory with a modern flash RAM SD card on top
Random-access memory
–
1 Megabit chip – one of the last models developed by VEB Carl Zeiss Jena in 1989
77.
Reynolds number
–
The Reynolds number is an important dimensionless quantity in fluid mechanics, used to help predict flow patterns in different fluid flow situations. It is widely used in many applications, ranging from liquid flow in a pipe to the passage of air over an aircraft wing. A similar effect is created by the introduction of a stream of high-velocity fluid into a low-velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, a factor in developing turbulent flow. The application of Reynolds numbers to both situations allows scaling factors to be developed. The Reynolds number can be defined for different situations where a fluid is in relative motion to a surface. These definitions generally include a velocity and a characteristic length or characteristic dimension. For aircraft or ships, the length or width can be used. For flow in a pipe, the internal diameter is generally used today; for a sphere moving in a fluid, its diameter is used. Other shapes, such as non-spherical objects, have an equivalent diameter defined. For fluids of variable density, such as compressible gases, or of variable viscosity, such as non-Newtonian fluids, special rules apply. The velocity may also be a matter of convention in some circumstances, notably stirred vessels. In practice, matching the Reynolds number is not on its own sufficient to guarantee similitude. Very small changes to shape and surface roughness can result in very different flows. Nevertheless, Reynolds numbers are widely used.
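The definition behind all of these cases is Re = ρvL/μ, with L the characteristic length discussed above. A sketch for the pipe-flow case, with illustrative (assumed) water properties; the conventional turbulence threshold of roughly Re > 4000 for pipes is a rule of thumb, not a figure from this article:

```python
def reynolds(rho, v, L, mu):
    """Reynolds number Re = rho*v*L/mu (dimensionless).

    rho: density (kg/m^3), v: flow speed (m/s),
    L: characteristic length (m), mu: dynamic viscosity (Pa*s).
    """
    return rho * v * L / mu

# Assumed example: water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) moving at
# 1 m/s through a 25 mm pipe; for pipe flow the internal diameter is the
# characteristic length. The result, Re = 25 000, is well into the
# turbulent regime (pipe flow is typically turbulent above Re ~ 4000).
re = reynolds(1000.0, 1.0, 0.025, 1e-3)
```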
Reynolds number
–
Sir George Stokes introduced Reynolds numbers
Reynolds number
–
Osborne Reynolds popularised the concept
Reynolds number
–
The Moody diagram, which describes the Darcy–Weisbach friction factor f as a function of the Reynolds number and relative pipe roughness.
78.
Discretization
–
In mathematics, discretization concerns the process of transferring continuous functions, models and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Processing on a digital computer also requires another process called quantization. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable. Typical methods include the Euler–Maruyama method and the zero-order hold. Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of granularity, as when discrete variables are aggregated or multiple discrete categories fused. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand. Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing. For a linear state-space model dx/dt = Ax + Bu sampled with period T and a zero-order hold on the input, the exact discrete model x[k+1] = A_d x[k] + B_d u[k] can be computed by first constructing the block matrix M = [[A, B], [0, 0]] and then computing its matrix exponential, e^{MT} = [[A_d, B_d], [0, I]], so that A_d = e^{AT}. We assume that u is constant during each timestep. Exact discretization may sometimes be intractable due to the heavy matrix-exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on the fact that for small timesteps e^{AT} ≈ I + AT. Each of these approximate methods has different stability properties.
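In the scalar case the matrix exponential reduces to an ordinary exponential, so the exact zero-order-hold model and the small-timestep approximation e^{aT} ≈ 1 + aT can be compared directly. A sketch; the system coefficients are illustrative assumptions:

```python
import math

def discretize_exact(a, b, T):
    """Zero-order-hold discretization of the scalar system dx/dt = a*x + b*u:
    x[k+1] = ad*x[k] + bd*u[k], with ad = e^(aT) and bd = (ad - 1)/a * b."""
    ad = math.exp(a * T)
    bd = (ad - 1.0) / a * b
    return ad, bd

def discretize_euler(a, b, T):
    """Forward-Euler approximation, valid for small T: e^(aT) ~ 1 + aT."""
    return 1.0 + a * T, b * T

# Assumed example system: a = -2, b = 1, sampled at T = 0.01.
ad, bd = discretize_exact(-2.0, 1.0, 0.01)
ae, be = discretize_euler(-2.0, 1.0, 0.01)
```

For this stable system the two models agree to about two parts in ten thousand at T = 0.01; the gap, and the Euler model's stability margin, degrade as T grows, which is the trade-off the final sentence above refers to.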
Discretization
–
A solution to a discretized partial differential equation, obtained with the finite element method.
79.
High-resolution scheme
–
High-resolution schemes are used in the numerical solution of partial differential equations where high accuracy is required in the presence of shocks or discontinuities. They have the following properties: higher-order accuracy is obtained in smooth parts of the solution; solutions are free from spurious oscillations or wiggles; high accuracy is obtained around shocks and discontinuities; and the number of mesh points containing the wave is small compared with a first-order scheme of similar accuracy. To avoid spurious or non-physical oscillations where shocks are present, schemes that exhibit a total variation diminishing (TVD) characteristic are especially attractive. Two techniques that are proving to be particularly effective are MUSCL (a flux/slope limiter method) and the WENO method. Both methods are usually referred to as high-resolution schemes. MUSCL methods are generally second-order accurate in smooth regions and provide good-resolution, monotonic solutions around discontinuities. They are straightforward to implement and are computationally efficient. For problems comprising both shocks and complex smooth structure, WENO schemes can provide higher accuracy than second-order schemes along with good resolution around discontinuities.
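The core of a MUSCL method is a slope-limited linear reconstruction inside each cell. The sketch below shows only that reconstruction step with the minmod limiter (function names are mine, and a full scheme would add a Riemann solver and time integration); on a step profile the limiter zeroes the slope at the jump, so no new extrema are created:

```python
def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope, zero at extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_interface_values(u):
    """Second-order MUSCL-type reconstruction: for each interior cell i,
    return the limited left/right states u[i] -/+ 0.5*slope at its faces."""
    faces = []
    for i in range(1, len(u) - 1):
        s = minmod(u[i] - u[i - 1], u[i + 1] - u[i])  # limited cell slope
        faces.append((u[i] - 0.5 * s, u[i] + 0.5 * s))
    return faces

# Step profile: every reconstructed value stays within [0, 1],
# i.e. the reconstruction is monotonic across the discontinuity.
step_faces = muscl_interface_values([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Smooth (linear) data: the full slope survives, giving second order.
smooth_faces = muscl_interface_values([0.0, 1.0, 2.0, 3.0])
```

This is exactly the dual behavior the paragraph describes: second-order accuracy in smooth regions, first-order clipping at discontinuities.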
High-resolution scheme
–
Typical high-resolution scheme based on MUSCL reconstruction.
80.
Flux limiters
–
The use of flux limiters, together with an appropriate high-resolution scheme, makes the solutions total variation diminishing (TVD). Flux limiters are used in high-resolution schemes for solving problems described by partial differential equations, and only come into operation when sharp wave fronts are present. For smoothly changing waves, the spatial derivatives can be represented without introducing spurious oscillations. The limiter function is constrained to be greater than or equal to zero, i.e. ϕ(r) ≥ 0. When the limiter is equal to zero, the flux is represented by a low-resolution scheme; similarly, when the limiter is equal to 1, it is represented by a high-resolution scheme. The various limiters are selected according to the particular problem and solution scheme; a particular choice is usually made on a trial-and-error basis. Some common limiter functions, with their limits as r → ∞, are: HQUICK: ϕ_hq(r) = 2(r + |r|)/(r + 3); lim r→∞ ϕ_hq(r) = 4. Koren (accurate for sufficiently smooth data): ϕ_kn(r) = max[0, min(2r, (1 + 2r)/3, 2)]; lim r→∞ ϕ_kn(r) = 2. Minmod (symmetric): ϕ_mm(r) = max[0, min(1, r)]; lim r→∞ ϕ_mm(r) = 1. Monotonized central (symmetric): ϕ_mc(r) = max[0, min(2r, (1 + r)/2, 2)]; lim r→∞ ϕ_mc(r) = 2. Osher: ϕ_os(r) = max[0, min(r, β)], 1 ≤ β ≤ 2; lim r→∞ ϕ_os(r) = β. Ospre (symmetric): ϕ_op(r) = 1.5(r² + r)/(r² + r + 1); lim r→∞ ϕ_op(r) = 1.5. Smart: ϕ_sm(r) = max[0, min(2r, 0.25 + 0.75r, 4)]; lim r→∞ ϕ_sm(r) = 4.
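Three of the limiter functions above can be coded directly, and their stated limits as r → ∞ (and the second-order condition ϕ(1) = 1) checked numerically:

```python
import numpy as np

def phi_minmod(r):
    # minmod: phi(r) = max[0, min(1, r)]
    return np.maximum(0.0, np.minimum(1.0, r))

def phi_mc(r):
    # monotonized central: phi(r) = max[0, min(2r, (1+r)/2, 2)]
    return np.maximum(0.0, np.minimum(np.minimum(2 * r, 0.5 * (1 + r)), 2.0))

def phi_ospre(r):
    # ospre: phi(r) = 1.5 (r^2 + r) / (r^2 + r + 1)
    return 1.5 * (r * r + r) / (r * r + r + 1.0)

r = 1e9  # proxy for the limit r -> infinity
print(phi_minmod(r), phi_mc(r), phi_ospre(r))  # approach 1, 2, 1.5
```

All three satisfy ϕ(1) = 1, the condition required for second-order accuracy in smooth regions.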
Flux limiters
–
Admissible limiter region for second-order TVD schemes.
81.
Total variation diminishing
–
In numerical methods, total variation diminishing (TVD) is a property of certain discretization schemes used to solve hyperbolic partial differential equations. The most notable application of this method is in computational fluid dynamics. The concept of TVD was introduced by Ami Harten. The total variation of a discrete solution is TV(u) = Σ_j |u_{j+1} − u_j|, and a numerical method is said to be total variation diminishing if TV(u^{n+1}) ≤ TV(u^n). Harten (1983) proved the following properties for a numerical scheme: a monotone scheme is TVD, and a TVD scheme is monotonicity preserving. In computational fluid dynamics, a TVD scheme is employed to capture sharper shock predictions without any misleading oscillations when the variation of the variable ϕ is discontinuous. Capturing shocks on fine grids makes the computation heavy and therefore uneconomic, while the use of coarse grids with the central difference scheme, upwind scheme, or power law scheme gives false shock predictions. Because a TVD scheme preserves monotonicity, there are no spurious oscillations in the solution. Here the Peclet number is P = F/D = ρuδx/Γ, where F and D are the convective and diffusive coefficients of the discretized equation, and the limited flux f(r⁺) is a function of the ratio (ϕ_P − ϕ_L)/(ϕ_R − ϕ_L) of nodal values. Monotone schemes are attractive for solving engineering and scientific problems because they do not produce non-physical solutions. Godunov's theorem proves that linear schemes which preserve monotonicity are, at most, only first-order accurate. Higher-order linear schemes, although more accurate for smooth solutions, are not TVD and tend to introduce spurious oscillations where discontinuities or shocks arise. To overcome these drawbacks, various non-linear techniques have been developed, often using flux/slope limiters.
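The TVD property can be checked directly on a model problem. First-order upwind is a monotone scheme, so advecting a step with it never increases the total variation (grid size and Courant number below are illustrative):

```python
import numpy as np

def total_variation(u):
    # TV(u) = sum_j |u_{j+1} - u_j|
    return np.sum(np.abs(np.diff(u)))

N, c = 100, 0.5                                  # cells, Courant number c <= 1
u = np.where(np.arange(N) < N // 2, 1.0, 0.0)    # step profile
tv0 = total_variation(u)
for _ in range(50):
    # first-order upwind update with fixed inflow u[0] = 1
    u[1:] = u[1:] - c * (u[1:] - u[:-1])
print(total_variation(u), tv0)  # TV never grows
```

The scheme smears the step (it is only first-order accurate) but produces no overshoots, exactly the monotone behaviour Harten's result guarantees.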
Total variation diminishing
–
A picture showing the control volume with velocities at the faces, nodes, and the distances between them, where 'P' is the node at the center.
82.
Nonlinear system
–
In the physical sciences, a nonlinear system is a system in which the output is not directly proportional to the input. Nonlinear problems are of interest to engineers, physicists, mathematicians, and many other scientists because most systems are inherently nonlinear in nature. Nonlinear systems may appear chaotic or counterintuitive, contrasting with the much simpler linear systems. A system of equations is nonlinear if the equations to be solved cannot be written as a linear combination of the unknown variables or functions; it does not matter if known nonlinear functions appear in the equations. As nonlinear equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). It follows that some aspects of the behavior of a nonlinear system appear commonly to be counterintuitive, unpredictable, or even chaotic. Although chaotic behavior may resemble random behavior, it is not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology. Some authors use the term nonlinear science for the study of nonlinear systems. This is disputed by others; as Stanisław Ulam put it, using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals. For a linear map, additivity implies homogeneity for any rational α and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity; for example, an antilinear map is additive but not homogeneous. An equation of the form f(x) = C is called homogeneous if C = 0.
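The caption below mentions linearizations of a pendulum, the classic example: the nonlinear pendulum equation d²θ/dt² = −(g/L) sin θ is commonly linearized via sin θ ≈ θ. A quick numerical check shows when that approximation is trustworthy (angles chosen for illustration):

```python
import numpy as np

# Relative error of the small-angle linearization sin(theta) ≈ theta
theta = np.array([0.05, 0.2, 0.5, 1.0])  # radians
rel_err = np.abs(np.sin(theta) - theta) / np.abs(np.sin(theta))
print(rel_err)
```

The error is below 0.1% at 0.05 rad but grows to nearly 19% at 1 rad, illustrating why linearized models of nonlinear systems are valid only near the operating point.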
Nonlinear system
–
Linearizations of a pendulum
83.
Ludwig Prandtl
–
Ludwig Prandtl was a German engineer. In the 1920s Prandtl developed the mathematical basis for the fundamental principles of subsonic aerodynamics, including transonic velocities. His studies identified the boundary layer and underpin thin-aerofoil and lifting-line theories. The Prandtl number was named after him. He was born in Freising, near Munich, in 1875. His mother suffered from a lengthy illness and, as a result, Ludwig spent more time with his father, a professor of engineering. His father also encouraged him to observe nature and think about his observations. Prandtl graduated with a Ph.D. under the guidance of Professor August Foeppl within six years. His first job was as an engineer designing factory equipment. There, Prandtl entered the field of fluid mechanics when he had to design a suction device. After carrying out some experiments, Prandtl came up with a new device that used less power than the one it replaced. In 1901 he became a professor of fluid mechanics at the technical school in Hannover, now the Technical University of Hannover. It was here that he developed many of his most important theories. His celebrated 1904 paper on fluid flow at very small friction introduced the boundary layer; the paper also described flow separation as a result of the boundary layer, clearly explaining the concept of stall for the first time. In the end, the approximation contained in his original paper remains in widespread use.
Ludwig Prandtl
–
Ludwig Prandtl
Ludwig Prandtl
–
Ludwig Prandtl 1904 with his fluid test channel
84.
Large eddy simulation
–
Large eddy simulation (LES) is a mathematical model for turbulence used in computational fluid dynamics. It was initially proposed in 1963 by Joseph Smagorinsky to simulate atmospheric air currents, and first explored by Deardorff. LES is currently applied in a wide variety of engineering applications, including combustion, acoustics, and simulations of the atmospheric boundary layer. The principal idea behind LES is to reduce computational cost by applying low-pass filtering to the governing equations; such filtering, which can be viewed as a time- and spatial-averaging, effectively removes small-scale information from the numerical solution. An LES filter can perform a spatial filtering operation, a temporal filtering operation, or both. The filtering operation can be written as a convolution: ϕ̄ = G ⋆ ϕ. The filter kernel G has an associated cutoff length scale Δ and cutoff time scale τ_c; scales smaller than these are eliminated from ϕ̄. Using the above definition, any field ϕ may be split up into a filtered and a sub-filtered portion, as ϕ = ϕ̄ + ϕ′. It is important to note that the large eddy simulation filtering operation does not satisfy the properties of a Reynolds operator. The governing equations of LES are obtained by filtering the partial differential equations governing the flow field u. There are differences between the incompressible and compressible LES governing equations, which lead to the definition of a new filtering operation. The nonlinear filtered advection term (u_i u_j)‾ is the chief cause of difficulty in LES modeling: it requires knowledge of the unfiltered field, which is unknown, so it must be modeled. The analysis that follows illustrates the difficulty caused by the nonlinearity, namely, that it causes interaction between large and small scales, preventing separation of scales.
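A minimal sketch of the filtering decomposition ϕ = ϕ̄ + ϕ′ with a top-hat (box) filter on a periodic 1D signal, also showing that filtering twice is not the same as filtering once, so the operation is not a Reynolds operator (signal and filter width are illustrative):

```python
import numpy as np

def box_filter(phi, width):
    # top-hat spatial filter implemented as circular convolution via FFT
    n = len(phi)
    kernel = np.ones(width) / width
    return np.real(np.fft.ifft(np.fft.fft(phi) * np.fft.fft(kernel, n)))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
phi = np.sin(x) + 0.3 * np.sin(20 * x)   # large-scale + small-scale content
phi_bar = box_filter(phi, 16)            # filtered field
phi_prime = phi - phi_bar                # sub-filter fluctuation
twice = box_filter(phi_bar, 16)
print(np.max(np.abs(twice - phi_bar)))   # nonzero: not a Reynolds operator
```

The filter strongly damps the sin(20x) component while leaving sin(x) nearly untouched, which is exactly the scale separation LES relies on.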
Large eddy simulation
–
Large eddy simulation of a turbulent gas velocity field.
85.
Wavelet
–
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. For example, a wavelet could be created to have a frequency of middle C and a short duration of roughly a 32nd note; if this wavelet were convolved with a signal created from the recording of a song, the result would be useful for determining when the middle C note was being played. Mathematically, the wavelet will correlate with the signal if the unknown signal contains information of similar frequency. This concept of correlation is at the core of many practical applications of wavelet theory. Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will decompose data without gaps or overlap so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss. This is accomplished through coherent states. The word wavelet has been used for decades in digital signal processing and exploration geophysics. The equivalent French word ondelette, meaning "small wave", was used by Morlet and Grossmann in the early 1980s. Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filterbanks.
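The reversible decomposition mentioned above can be demonstrated with the simplest wavelet of all: one level of the orthonormal Haar discrete wavelet transform splits a signal into averages (approximation) and differences (detail), and the inverse transform recovers the signal exactly (the sample data are illustrative):

```python
import numpy as np

def haar_dwt(x):
    # one level of the Haar DWT: pairwise normalized sums and differences
    s2 = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s2   # low-pass part
    detail = (x[0::2] - x[1::2]) / s2   # high-pass part
    return approx, detail

def haar_idwt(approx, detail):
    # inverse transform: perfect reconstruction
    s2 = np.sqrt(2.0)
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / s2
    x[1::2] = (approx - detail) / s2
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
print(haar_idwt(a, d))  # recovers x
```

The approximation and detail signals are the "complementary" pair: together they carry exactly the information of the original, which is why wavelet compression can be lossless in principle.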
Wavelet
–
Seismic wavelet
Wavelet
–
Meyer
86.
Vortex stretching
–
Vortex stretching is the lengthening of vortices in three-dimensional fluid flow, associated with a corresponding term in the vorticity equation. For example, vorticity transport in an incompressible inviscid flow is governed by Dω/Dt = (ω · ∇)v, where D/Dt is the convective (material) derivative. The term on the right side, (ω · ∇)v, is the vortex stretching term. It amplifies the vorticity ω when the velocity field is diverging in the direction parallel to ω. A simple example of vortex stretching in a viscous flow is provided by the Burgers vortex. Vortex stretching is at the core of the description of the turbulence energy cascade from the large scales to the small scales in turbulence. In turbulence, fluid elements are on average lengthened more than shortened; in the end, this results in more vortex stretching than vortex squeezing. For incompressible flow, due to volume conservation of fluid elements, the lengthening implies thinning of the fluid elements in the directions perpendicular to the stretching direction. This reduces the radial length scale of the associated vorticity. Finally, at the small scales, of the order of the Kolmogorov microscales, the turbulence kinetic energy is dissipated into heat through the action of molecular viscosity.
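The stretching term (ω · ∇)v can be evaluated explicitly for a simple axisymmetric straining flow (the strain rate and vorticity below are illustrative, not from the article):

```python
import numpy as np

# Straining flow v = (-a x / 2, -a y / 2, a z): incompressible, and it
# stretches fluid elements along z.
a = 2.0
grad_v = np.array([[-a / 2, 0.0, 0.0],
                   [0.0, -a / 2, 0.0],
                   [0.0, 0.0, a]])      # grad_v[i, j] = dv_i / dx_j
omega = np.array([0.0, 0.0, 1.0])       # vorticity aligned with z

# For a constant-gradient flow, (omega . grad) v = grad_v @ omega
stretch = grad_v @ omega
print(stretch)  # (0, 0, a): vorticity along z is amplified at rate a
```

The positive z-component shows the amplification described above: vorticity grows where the velocity diverges along its own direction, while the zero trace of grad_v confirms incompressibility.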
Vortex stretching
–
Studies of vortices in turbulent fluid motion by Leonardo da Vinci.
87.
Probability density function
–
In probability theory, a probability density function (PDF) describes the relative likelihood for a continuous random variable to take on a given value. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one. The terms "probability distribution function" and "probability function" have sometimes been used to denote the probability density function; however, this use is not standard among probabilists and statisticians. Further confusion of terminology exists because "density function" has also been used for what is here called the "probability mass function" (PMF). In general though, the PMF is used in the context of discrete random variables, while the PDF is used in the context of continuous random variables. Suppose a species of bacteria typically lives 4 to 6 hours. What is the probability that a bacterium lives exactly 5 hours? The answer is 0%. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000... hours. Instead we might ask: What is the probability that the bacterium dies between 5 hours and 5.01 hours? Let's say the answer is 0.02. Next: What is the probability that the bacterium dies between 5 hours and 5.001 hours? The answer is probably around 0.002, since this interval is 1/10th as long as the previous one. The probability that the bacterium dies between 5 hours and 5.0001 hours is probably about 0.0002, and so on. In these three examples, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour. This quantity, 2 per hour, is the value of the probability density function at the 5-hour mark.
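The shrinking-interval argument can be checked numerically for any distribution with a known CDF F: the ratio (F(t + h) − F(t)) / h converges to the density at t as h shrinks. Here an exponential lifetime model is used purely as an illustration (the rate λ is not from the article):

```python
import math

lam = 0.5
F = lambda t: 1.0 - math.exp(-lam * t)     # exponential CDF
pdf = lambda t: lam * math.exp(-lam * t)   # its density

for h in (0.01, 0.001, 0.0001):
    ratio = (F(5 + h) - F(5)) / h
    print(h, ratio)  # approaches pdf(5) as h -> 0
```

The printed ratios converge to pdf(5), mirroring how the bacterium example's "2 per hour" emerges as the interval length goes to zero.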
Probability density function
–
Boxplot and probability density function of a normal distribution N(0, σ²).
88.
Kinetic theory of gases
–
Kinetic theory explains macroscopic properties of gases, such as pressure, temperature, viscosity, thermal conductivity, and volume, by considering their molecular composition and motion. The theory posits that gas pressure is due to the impacts, on the walls of a container, of molecules or atoms moving at different velocities. The random movement of small particles suspended in a fluid, known as Brownian motion, results directly from collisions between the grains or particles and the liquid molecules. Kinetic theory defines temperature in its own way, not identical with the thermodynamic definition. The theory for ideal gases makes the following assumptions: The gas consists of very small particles known as molecules; this is equivalent to stating that the average distance separating the gas particles is large compared to their size. These particles have the same mass. The number of molecules is so large that statistical treatment can be applied. These molecules are in constant, random, and rapid motion. The rapidly moving particles constantly collide among themselves and with the walls of the container. All these collisions are perfectly elastic; this means the molecules are considered to be perfectly spherical in shape and elastic in nature. Except during collisions, the interactions among molecules are negligible. This implies, among other things, that relativistic effects are negligible.
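A key quantitative result of kinetic theory is that the average kinetic energy per molecule of an ideal monatomic gas is (3/2)kT. This can be checked by Monte Carlo sampling of the Maxwell–Boltzmann velocity distribution, in which each velocity component is Gaussian with variance kT/m (natural units below are an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
kT, m, N = 1.0, 1.0, 200_000   # temperature, mass, number of molecules

# Sample 3D velocities from the Maxwell-Boltzmann distribution
v = rng.normal(0.0, np.sqrt(kT / m), size=(N, 3))
mean_ke = 0.5 * m * np.mean(np.sum(v**2, axis=1))
print(mean_ke)  # ≈ (3/2) kT
```

Each of the three translational degrees of freedom contributes (1/2)kT, the equipartition result underlying the kinetic definition of temperature.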
Kinetic theory of gases
–
Hydrodynamica front cover
Kinetic theory of gases
–
The temperature of an ideal monatomic gas is proportional to the average kinetic energy of its atoms. The size of helium atoms relative to their spacing is shown to scale under 1950 atmospheres of pressure. The atoms have a certain average speed, slowed down here two trillion-fold relative to room temperature.
89.
Langevin equation
–
In statistical physics, a Langevin equation is a stochastic differential equation describing the time evolution of a subset of the degrees of freedom of a system. These degrees of freedom typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system, and the fast variables are responsible for the stochastic nature of the Langevin equation. The original Langevin equation describes Brownian motion, m d²x/dt² = −λ dx/dt + η(t); the degree of freedom of interest here is the position x of the particle, and m denotes the particle's mass. Treating the random force η(t) as uncorrelated in time is an approximation; the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. The Langevin equation as it stands requires an interpretation in this case; see Itō calculus. There is a formal derivation of a generic Langevin equation from classical mechanics. This generic equation plays a central role in other areas of nonequilibrium statistical mechanics, and the equation for Brownian motion above is a special case. An essential condition of the derivation is a criterion dividing the degrees of freedom into the categories slow and fast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Densities of conserved quantities, and in particular their long-wavelength components, thus are slow variable candidates. This division is realized with the Zwanzig projection operator, the essential tool in the derivation. The derivation is not completely rigorous because it relies on assumptions akin to assumptions required elsewhere in statistical mechanics.
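A minimal sketch of integrating an overdamped Langevin equation dx = −(k/γ) x dt + sqrt(2kT/γ) dW with the Euler–Maruyama method, in natural units with illustrative parameter names (k is a spring constant, γ a friction coefficient). At long times the ensemble variance should relax to the equilibrium value kT/k:

```python
import numpy as np

rng = np.random.default_rng(1)
k, gamma, kT = 1.0, 1.0, 1.0
dt, nsteps, npaths = 0.01, 2000, 5000

x = np.zeros(npaths)                     # ensemble of particles at x = 0
for _ in range(nsteps):
    noise = rng.normal(0.0, 1.0, npaths)
    # Euler-Maruyama step: deterministic drift + sqrt(dt)-scaled noise
    x += -(k / gamma) * x * dt + np.sqrt(2.0 * kT * dt / gamma) * noise
print(x.var())  # approaches the equilibrium variance kT/k = 1
```

The sqrt(dt) scaling of the noise term is the hallmark of stochastic (Itō) integration, consistent with the interpretation issue mentioned above.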
Langevin equation
–
Phase portrait of a harmonic oscillator showing spreading due to the Langevin Equation.
90.
N-body problem
–
In physics, the n-body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem has been motivated by the desire to understand the motions of the Sun, Moon, planets, and visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem. The n-body problem in general relativity is considerably more difficult to solve. To this purpose, the two-body problem is discussed below, as is the famous restricted three-body problem. Newton realized that the gravitational interactive forces amongst all the planets were affecting all their orbits. Thus came the awareness and rise of the n-body "problem" in the early 17th century. Ironically, this conformity led to the wrong approach: after Newton's time the n-body problem historically was not stated correctly, because it did not include a reference to those gravitational interactive forces. Newton implies in his Principia that the n-body problem is unsolvable because of those gravitational interactive forces. Newton said in paragraph 21: And hence it is that the attractive force is found in both bodies. The Sun attracts Jupiter and the other planets, and similarly the satellites act on one another. Two bodies can be drawn to each other by the contraction of a rope between them. This last statement, which implies the existence of gravitational interactive forces, is key. The problem of finding the general solution of the n-body problem was considered very important and challenging.
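Numerically, the classical n-body problem is usually attacked by direct summation of the pairwise gravitational accelerations, an O(N²) computation per step. A minimal sketch (units with G = 1 and a small softening parameter are illustrative choices):

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    # direct-summation gravity: O(N^2) pairwise forces, softened by eps
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                r2 = d @ d + eps**2
                acc[i] += G * mass[j] * d / r2**1.5
    return acc

# Two equal masses one unit apart
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
a = accelerations(pos, mass)
print(a[0], a[1])  # equal and opposite, per Newton's third law
```

The equal-and-opposite accelerations are the numerical face of the mutual interactive forces Newton describes: each body attracts and is attracted.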
N-body problem
–
The Real Motion vs. Kepler's Apparent Motion
N-body problem
–
Restricted 3-Body Problem
91.
Volume of fluid method
–
In computational fluid dynamics, the volume of fluid (VOF) method is a free-surface modelling technique, i.e. a numerical technique for tracking and locating the free surface (or fluid–fluid interface). The Navier–Stokes equations describing the motion of the flow have to be solved separately; the same applies for all other interface advection algorithms. The volume of fluid method is based on the earlier Marker-and-Cell (MAC) methods. Since the VOF method surpassed MAC by lowering storage requirements, it quickly became popular. Early applications include Torrey et al. from Los Alamos, who created VOF codes for NASA. The first implementations of VOF suffered from imperfect interface description, later remedied by introducing a Piecewise-Linear Interface Calculation (PLIC) scheme. Using VOF with PLIC is a contemporary standard, used in a number of computer codes, such as FLOW-3D, Gerris, ANSYS Fluent and STAR-CCM+. The method is based on the idea of a so-called fraction function C. It is a scalar function, defined as the integral of a fluid's characteristic function in the control volume, namely the volume of a computational grid cell. The volume fraction of each fluid is tracked through every cell in the computational grid, while all fluids share a single set of momentum equations. C is a discontinuous function: its value jumps from 0 to 1 when the argument moves into the interior of the traced phase. The normal direction of the fluid interface is found where the value of C changes most rapidly. With this method, the free surface is not defined sharply; instead, it is distributed over the height of a cell. Thus, in order to attain accurate results, local grid refinements have to be done.
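A minimal sketch of initializing a fraction function C for a circular fluid region on a unit grid, estimating each cell's covered area by sub-sampling (grid size, circle position and radius are illustrative):

```python
import numpy as np

def fraction_field(nx, ny, cx, cy, r, sub=8):
    # area fraction C of a disc (center cx, cy, radius r) in each cell of
    # a unit-square grid, estimated by sub x sub midpoint sampling
    C = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            xs = (i + (np.arange(sub) + 0.5) / sub) / nx
            ys = (j + (np.arange(sub) + 0.5) / sub) / ny
            X, Y = np.meshgrid(xs, ys)
            C[j, i] = np.mean((X - cx)**2 + (Y - cy)**2 <= r**2)
    return C

C = fraction_field(20, 20, 0.5, 0.5, 0.25)
print(C.sum() / C.size, np.pi * 0.25**2)  # total fraction ≈ disc area
```

Interior cells hold C = 1, exterior cells C = 0, and only the band of interface cells takes intermediate values, which is where a PLIC-style reconstruction would locate the interface.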
Volume of fluid method
–
An illustration of fluid simulation using VOF method.
92.
Two-phase flow
–
In fluid mechanics, two-phase flow is a flow of gas and liquid, usually in a pipe. Two-phase flow is a particular example of multiphase flow. Two-phase flow can occur in various forms. The widely accepted method to categorize two-phase flows is to consider the velocity of each phase as if no other phase were present. The resulting parameter is a hypothetical quantity called the superficial velocity. Probably the most commonly studied cases of two-phase flow are in large-scale power systems. Coal- and gas-fired power stations use very large boilers to produce steam for use in turbines; in such cases, pressurised water changes to steam as it moves through the pipe. The design of boilers requires a detailed understanding of two-phase flow heat-transfer and pressure behaviour, which is significantly different from the single-phase case. Even more critically, nuclear reactors use water to remove heat from the reactor core using two-phase flow. Another case where two-phase flow can occur is in cavitation. Here a pump is operating close to the vapour pressure of the fluid being pumped. Similar effects can also occur on marine propellers; wherever it occurs, it is a serious problem for designers. When a bubble collapses, it can produce very large pressure spikes, which over time will cause damage to the propeller or turbine. The two-phase flow cases above are for a single fluid occurring by itself as two different phases, such as steam and water.
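The superficial velocity of each phase is its volumetric flow rate divided by the full pipe cross-section, as if that phase flowed alone. A minimal sketch with illustrative flow rates and pipe diameter:

```python
import math

Q_gas, Q_liquid = 0.02, 0.005    # volumetric flow rates, m^3/s
diameter = 0.1                   # pipe diameter, m
area = math.pi * diameter**2 / 4 # full cross-sectional area

j_gas = Q_gas / area             # gas superficial velocity, m/s
j_liquid = Q_liquid / area       # liquid superficial velocity, m/s
print(j_gas, j_liquid, j_gas + j_liquid)  # the sum is the mixture velocity
```

Because each phase actually occupies only part of the cross-section, the true phase velocities exceed these superficial values; the pair (j_gas, j_liquid) is nonetheless the standard coordinate system for flow-regime maps.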
Two-phase flow
–
Different modes of two-phase flows.
93.
Level set method
–
Level set methods are a conceptual framework for using level sets as a tool for the numerical analysis of surfaces and shapes. The figure on the right illustrates several important ideas about the level set method. In the upper-left corner we see a shape: a bounded region with a well-behaved boundary. The flat blue region represents the xy-plane. In the top row we see the shape changing its topology by splitting in two. It would be quite hard to describe this transformation numerically by parameterizing the boundary and following its evolution: one would need an algorithm able to detect the moment the shape splits in two, and then construct parameterizations for the two newly obtained curves. On the other hand, if we look at the bottom row, we see that the level set function merely translated downward. Thus, the level set method amounts to representing a closed curve Γ implicitly using an auxiliary function φ, called the level set function: Γ is the zero level set of φ, and the level set method manipulates Γ implicitly, through the function φ. This φ is assumed to take positive values inside the region delimited by the curve Γ and negative values outside. If the curve Γ moves in the normal direction with speed v, the level set function satisfies the level set equation ∂φ/∂t = v|∇φ|. This is a partial differential equation, in particular a Hamilton–Jacobi equation, and can be solved numerically, for example by using finite differences on a Cartesian grid. The numerical solution of the level set equation, however, requires sophisticated techniques: simple finite difference methods fail quickly, and upwinding methods such as the Godunov method fare better.
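The ease of handling topology changes can be shown with a toy implicit representation: two discs represented by a single level set function merge automatically as they grow, with no curve surgery (grid, centers and radii are illustrative):

```python
import numpy as np

# Signed-distance-style level set for the union of two discs:
# here phi < 0 inside and phi > 0 outside (one common sign convention)
x, y = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
d1 = np.sqrt((x + 0.6)**2 + y**2)   # distance to the left center
d2 = np.sqrt((x - 0.6)**2 + y**2)   # distance to the right center

def phi(r):
    # zero level set = boundary of the union of two discs of radius r
    return np.minimum(d1, d2) - r

# Before merging (r < 0.6) the midpoint (0, 0) is outside both discs;
# once r > 0.6 the discs have merged and the midpoint is inside.
print(phi(0.4)[200, 200], phi(0.8)[200, 200])
```

Growing r past 0.6 changes the topology of the zero level set from two curves to one, yet the update to φ is just pointwise arithmetic, exactly the advantage the article describes.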
Level set method
–
An illustration of the level set method
94.
Ordinary differential equations
–
In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions. The term ordinary is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations, whose solutions can be added and multiplied by coefficients, are well understood, and exact closed-form solutions are often available. Ordinary differential equations arise in many contexts of mathematics and science. Mathematical descriptions of change use derivatives; often, quantities are defined as the rates of change of other quantities, or as gradients of quantities, which is how they enter differential equations. Relevant mathematical fields include geometry and analytical mechanics. Scientific fields include astronomy, meteorology, chemistry, biology, ecology and population modelling, and economics. Many mathematicians have contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler. In general, the force F in Newton's second law is a function of the position x(t) of the particle at time t, and the unknown function x(t) appears on both sides of the differential equation, as indicated in the notation F(x(t)). In what follows, let y be a dependent variable and x an independent variable, so that y = f(x) is an unknown function of x. The notation for differentiation varies depending upon which notation is most useful for the task at hand. Given F, a function of x, y, and derivatives of y, an equation of the form F(x, y, y′, …, y⁽ⁿ⁻¹⁾) = y⁽ⁿ⁾ is called an explicit ordinary differential equation of order n.
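A minimal sketch of solving the simplest linear ODE, dy/dt = −k y with y(0) = 1 and exact solution e^(−kt), using the forward Euler method (step size and interval are illustrative):

```python
import math

# Forward Euler for dy/dt = -k y, y(0) = 1; exact solution is exp(-k t)
k, dt, T = 1.0, 0.001, 1.0
y = 1.0
for _ in range(round(T / dt)):
    y += dt * (-k * y)   # y_{n+1} = y_n + dt * f(t_n, y_n)
print(y, math.exp(-k * T))
```

With dt = 0.001 the Euler result matches the closed-form solution to about four decimal places; halving dt halves the error, reflecting the method's first-order accuracy.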
Ordinary differential equations
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
95.
Newton's method
–
In numerical analysis, Newton's method is a method for finding successively better approximations to the roots of a real-valued function, i.e. solutions x of f(x) = 0. Starting from an initial guess x₀, a better approximation x₁ is x₁ = x₀ − f(x₀)/f′(x₀); geometrically, (x₁, 0) is the intersection of the x-axis with the tangent to the graph of f at (x₀, f(x₀)). The process is repeated as x_{n+1} = x_n − f(x_n)/f′(x_n) until a sufficiently accurate value is reached. This algorithm is first in the class of Householder's methods, succeeded by Halley's method. The method can also be extended to systems of equations. Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation x_n. Then we can derive the formula for x_{n+1} by referring to the diagram on the right: the x-intercept of the tangent line at x_n is used as the next approximation to the root, x_{n+1}. In other words, setting y to zero and x to x_{n+1} in the tangent-line equation gives 0 = f′(x_n)(x_{n+1} − x_n) + f(x_n). Solving for x_{n+1} gives x_{n+1} = x_n − f(x_n)/f′(x_n). We start the process off with some arbitrary initial value x₀. The method will usually converge, provided this initial guess is close enough to the unknown zero and that f′(x₀) ≠ 0.
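The iteration x_{n+1} = x_n − f(x_n)/f′(x_n) can be sketched in a few lines; here it is applied to f(x) = x² − 2, whose positive root is √2 (tolerance and iteration cap are illustrative choices):

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    # Newton's method: repeat x <- x - f(x)/f'(x) until the step is tiny
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ≈ 1.41421356...
```

Starting from x₀ = 1, the quadratic convergence roughly doubles the number of correct digits each iteration, so machine precision is reached in a handful of steps.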
Newton's method
–
The tangent lines of x³ − 2x + 2 at 0 and 1 intersect the x-axis at 1 and 0 respectively, illustrating why Newton's method oscillates between these values for some starting points.
Newton's method
–
The function f is shown in blue and the tangent line is in red. We see that x_{n+1} is a better approximation than x_n for the root x of the function f.
96.
Fixed point iteration
–
In numerical analysis, fixed-point iteration is a method of computing fixed points of iterated functions. More specifically, given a function f and a starting point x₀, the fixed-point iteration is x_{n+1} = f(x_n), which gives rise to a sequence x₀, x₁, x₂, … that is hoped to converge to a point x. If f is continuous, then one can prove that the obtained x is a fixed point of f, i.e. f(x) = x. More generally, the function f can be defined on any metric space with values in that same space. The Babylonian iteration x_{n+1} = x_n/2 + 1/x_n for computing the square root of 2 is a special case of Newton's method quoted below; this example does satisfy the assumptions of the Banach fixed-point theorem, and the Banach fixed-point theorem allows one to obtain fixed-point iterations with linear convergence. By contrast, the fixed-point iteration x_{n+1} = 2x_n will diverge unless x₀ = 0; we say that the fixed point of f(x) = 2x is repelling. The requirement that f is continuous is important, as the following example shows. Newton's method for finding roots of a given differentiable function f is x_{n+1} = x_n − f(x_n)/f′(x_n). If we write g(x) = x − f(x)/f′(x), we may rewrite the Newton iteration as the fixed-point iteration x_{n+1} = g(x_n). At a fixed point x of g we have f(x)/f′(x) = 0; since 1/f′(x) is nonzero, this implies f(x) = 0, i.e. x is a root of f. Under the assumptions of the Banach fixed-point theorem, the Newton iteration, framed as a fixed-point method, demonstrates linear convergence. However, a more detailed analysis shows quadratic convergence, i.e. |x_n − x| < C q^(2^n), under certain circumstances. Halley's method, the next member of Householder's class, similarly attains cubic convergence when it works correctly, with error |x_n − x| < C q^(3^n).
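A classic convergent example is x_{n+1} = cos(x_n): near its fixed point, |cos′(x)| = |sin(x)| < 1, so cos is a contraction there and the Banach fixed-point theorem gives linear convergence (the starting value and iteration count are illustrative):

```python
import math

# Fixed-point iteration x <- cos(x); converges to the solution of cos(x) = x
x = 2.0
for _ in range(100):
    x = math.cos(x)
print(x)  # ≈ 0.739085 (the unique real solution of cos(x) = x)
```

Each iteration shrinks the error by roughly the factor |sin(x)| ≈ 0.67, the linear convergence rate predicted by the contraction-mapping argument.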
Fixed point iteration
–
The fixed-point iteration x_{n+1} = sin(x_n) with initial value x₀ = 2 converges to 0. This example does not satisfy the assumptions of the Banach fixed-point theorem and so its speed of convergence is very slow.
97.
Preconditioner
–
In linear algebra and numerical analysis, preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method. It is also common to call T = P⁻¹ the preconditioner, rather than P, since P itself is rarely explicitly available. Preconditioned iterative solvers typically outperform direct solvers, e.g. Gaussian elimination, for large, and especially for sparse, matrices. Left preconditioning is more common than right preconditioning. The preconditioned matrix P⁻¹A or AP⁻¹ is almost never explicitly formed; only the action of applying the preconditioner solve operation P⁻¹ to a given vector needs to be computed in iterative methods. Typically there is a trade-off in the choice of P: because the operation P⁻¹ must be applied at every step of the iterative solver, it should be cheap, yet P should approximate A well enough to accelerate convergence. The cheapest preconditioner would therefore be P = I, since then P⁻¹ = I; clearly, this results in the original linear system and the preconditioner does nothing. Some examples of typical preconditioning approaches are detailed below. Examples of preconditioned iterative methods for linear systems include the preconditioned conjugate gradient method, the biconjugate gradient method, and the generalized minimal residual method. For a symmetric positive definite matrix A, the preconditioner P is typically chosen to be symmetric positive definite as well. The preconditioned operator P⁻¹A is then also symmetric positive definite, but with respect to the P-based scalar product. Denoting T = P⁻¹, we highlight that preconditioning is practically implemented as multiplying some vector r by T, i.e. computing the product Tr.
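The condition-number reduction can be illustrated with the simplest preconditioner of all, diagonal (Jacobi) scaling. The symmetric scaling D^(−1/2) A D^(−1/2) preserves symmetry and can sharply reduce the condition number of a badly scaled SPD matrix (the matrix below is an illustrative toy):

```python
import numpy as np

# Badly scaled symmetric positive definite matrix
A = np.array([[1e4, 1.0],
              [1.0, 1.0]])

# Jacobi preconditioning: symmetric diagonal scaling D^(-1/2) A D^(-1/2)
d = np.sqrt(np.diag(A))
M = A / np.outer(d, d)
print(np.linalg.cond(A), np.linalg.cond(M))
```

The condition number drops from about 10⁴ to nearly 1, which translates directly into far fewer conjugate-gradient iterations on such systems.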
Preconditioner
–
Illustration of gradient descent
98.
Aorta
–
The aorta is the main artery in the human body, originating from the left ventricle of the heart; it distributes oxygenated blood to all parts of the body through the systemic circulation. In anatomical sources, the aorta is usually divided into sections. One way of classifying a part of the aorta is by anatomical compartment, where the thoracic aorta runs from the heart to the diaphragm; the aorta then continues downward as the abdominal aorta, from the diaphragm to the aortic bifurcation. Another system divides the aorta with respect to the direction of blood flow: here the aorta begins as the ascending aorta and, following the aortic arch, travels inferiorly as the descending aorta. The descending aorta has two parts. The aorta begins to descend in the thoracic cavity, and consequently is known there as the thoracic aorta. After the aorta passes through the diaphragm, it is known as the abdominal aorta. The aorta ends by dividing into two major blood vessels, the common iliac arteries, and a smaller midline vessel, the median sacral artery. The ascending aorta begins at the opening of the aortic valve in the left ventricle of the heart. It runs through a common pericardial sheath with the pulmonary trunk. The transition from ascending aorta to aortic arch is at the pericardial reflection on the aorta. The left aortic sinus gives rise to the left coronary artery, and the right aortic sinus likewise gives rise to the right coronary artery. Together, these two arteries supply the heart.
Aorta
–
A pig's aorta cut open showing also some leaving arteries.
Aorta
–
Course of the aorta in the thorax (anterior view), starting posterior to the main pulmonary artery, then anterior to the right pulmonary arteries, the trachea and the esophagus, then turning posteriorly to course dorsally to these structures.
Aorta
–
Major Aorta anatomy displaying Ascending Aorta, Brachiocephalic trunk, Left Common Carotid Artery, Left Subclavian Artery, Aortic Isthmus, Aortic Arch and Descending Thoracic Aorta
99.
Blade element theory
–
Blade element theory is a mathematical process originally designed by William Froude, David W. Taylor and Stefan Drzewiecki to determine the behavior of propellers. It involves breaking a blade down into several small parts and then determining the forces on each of these small blade elements. One of the key difficulties lies in modelling the induced velocity on the rotor disk. At the most basic level of approximation, a uniform induced velocity on the disk is assumed, obtained from momentum theory: v_i = sqrt(T / (2ρA)), where T is the thrust, ρ the air density and A the disk area. This approach is sometimes called the Froude–Finsterwalder equation. The simplest forward-flight inflow models are first harmonic models.
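The uniform induced velocity v_i = sqrt(T / (2ρA)) is a one-line computation; the thrust, air density and rotor radius below are illustrative numbers, not from the article:

```python
import math

# Momentum-theory induced velocity at a hovering rotor disk
T = 10000.0            # thrust, N
rho = 1.225            # air density at sea level, kg/m^3
R = 5.0                # rotor radius, m

A = math.pi * R**2     # disk area
v_i = math.sqrt(T / (2.0 * rho * A))
print(v_i)             # induced velocity, m/s
```

For these values the downwash through the disk is about 7.2 m/s; blade element theory then uses this inflow when computing the local angle of attack of each blade element.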
Blade element theory
–
Velocities and forces for a blade element
100.
Central differencing scheme
–
In applied mathematics, the central differencing scheme is a finite difference method. The finite difference method optimizes the approximation of the differential operator at the central node of the considered patch and provides numerical solutions to differential equations. The right side of the convection–diffusion equation, which highlights the diffusion terms, can be represented using a central difference approximation; this equation represents a flux balance in a control volume. In the absence of a source term, equation one becomes d(ρuϕ)/dx = d(Γ dϕ/dx)/dx (Equation 2), together with the continuity equation d(ρu)/dx = 0 (Equation 3). But the central scheme does not possess transportiveness at higher Peclet numbers, since ϕ at a point is taken to be the average of the neighbouring nodes for all Pe. The Taylor series truncation error of the central differencing scheme is second order. The central scheme will be accurate only if Pe < 2; owing to this limitation, central differencing is not a suitable practice for general-purpose flow calculations. Nevertheless, central-type schemes are currently being applied on a regular basis in the solution of the Euler equations and Navier–Stokes equations, and results using the central approximation have demonstrated noticeable improvements in accuracy in smooth regions. The central difference schemes have a free parameter in conjunction with the fourth-difference dissipation; this dissipation is needed to approach a steady state. This scheme is more accurate than the first-order upwind scheme if the Peclet number is less than 2.
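The second-order Taylor-series accuracy mentioned above is easy to verify: for the central difference approximation of a derivative, halving the step size should quarter the error (the test function and step sizes are illustrative):

```python
import math

def central_diff(f, x, h):
    # central difference approximation of f'(x), with O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.0
exact = math.cos(x)  # derivative of sin at x
errs = [abs(central_diff(math.sin, x, h) - exact) for h in (0.1, 0.05, 0.025)]
print(errs[0] / errs[1], errs[1] / errs[2])  # each ratio ≈ 4
```

The observed error ratios of about 4 under step halving are the numerical signature of second-order accuracy.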
Central differencing scheme
–
Figure 1. Comparison of different schemes
101.
Finite element analysis
–
The finite element method (FEM) is a numerical technique for finding approximate solutions to boundary value problems for partial differential equations. It is also referred to as finite element analysis (FEA). It subdivides a large problem into smaller, simpler parts that are called finite elements. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function. The global system of equations has known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer. To explain the approximation in this process, FEM is commonly introduced as a special case of the Galerkin method. In simple terms, it is a procedure that minimizes the error of approximation by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. These equation sets are the element equations; they are linear if the underlying PDE is linear, and vice versa. The element equations are transformed into the coordinates of the global system by a spatial transformation that includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is often carried out by FEM software using coordinate data generated from the subdomains. FEM is best understood from its practical application, known as finite element analysis. FEA as applied in engineering is a computational tool for performing engineering analysis.
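The assemble-and-solve pipeline can be sketched in its simplest setting: linear (hat-function) elements for the 1D boundary value problem −u″ = 1 on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = x(1 − x)/2 (the element count is illustrative; for this problem, linear FEM with exact load integration is exact at the nodes):

```python
import numpy as np

n = 10                        # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)

# Assemble the tridiagonal stiffness matrix K and load vector F
# for the interior nodes (hat basis functions)
K = np.zeros((n - 1, n - 1))
F = np.full(n - 1, h)         # integral of f = 1 against each hat function
for i in range(n - 1):
    K[i, i] = 2.0 / h
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -1.0 / h
u = np.linalg.solve(K, F)     # solve the global system

exact = nodes[1:-1] * (1.0 - nodes[1:-1]) / 2.0
print(np.max(np.abs(u - exact)))  # agreement at the nodes
```

The element-level stencils (2/h on the diagonal, −1/h off-diagonal) are exactly the "element equations" assembled into the global system described above.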
Finite element analysis
–
Visualization of how a car deforms in an asymmetrical crash using finite element analysis. [1]
Finite element analysis
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
102.
KIVA (software)
–
KIVA is a family of Fortran-based computational fluid dynamics software developed by Los Alamos National Laboratory (LANL). The software predicts complex fuel and air flows as well as ignition and pollutant-formation processes in engines. General Motors has used KIVA in the development of stratified-charge gasoline engines as well as the fast-burn, homogeneous-charge gasoline engine. At the same time, the company realized a more robust design and improved economy while meeting all environmental and customer constraints. LANL's computational fluid dynamics expertise dates back to the 1940s. KIVA is used by hundreds of organizations, including the Big Three U.S. automakers, Cummins, Caterpillar, and various federal laboratories. Such advanced engine designs, however, also create more difficulty in controlling the combustion process. Poor or incomplete combustion can cause higher levels of emissions and lower engine efficiency. To optimize combustion processes, engine designers have traditionally undertaken manual engine modifications and analyzed the results. This iterative process does not lend itself to identifying the optimal engine design specifications. A transient, three-dimensional, multiphase, multicomponent code for the analysis of chemically reacting flows with sprays has been under development at LANL for decades. The code discretizes space using the finite volume method. The code uses implicit time-advancement, with the exception of the advective terms, which are cast in a second-order monotonicity-preserving manner. Also, the convection calculations can be subcycled in the desired regions to avoid restricting the time step due to Courant conditions. KIVA's functionality extends to supersonic flows in both laminar and turbulent regimes.
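The subcycling idea mentioned above can be illustrated with a small arithmetic sketch (an illustration of the general technique, not KIVA's actual code): given the largest advective speed in a region, the mesh spacing, and the solver's full time step, one computes how many equal sub-steps keep each advection sub-step under a Courant limit.

```python
import math

def advection_subcycles(u_max, dx, dt, cfl_limit=1.0):
    """Number of equal sub-steps so each advection sub-step
    satisfies the Courant condition u_max * dt_sub / dx <= cfl_limit."""
    cfl = u_max * dt / dx                 # Courant number of the full time step
    return max(1, math.ceil(cfl / cfl_limit))

# Full step has CFL = 50 * 5e-5 / 1e-3 = 2.5, so 3 subcycles are needed
n_sub = advection_subcycles(u_max=50.0, dx=1e-3, dt=5e-5)  # → 3
```

This lets the implicit solver keep a large global time step while only the explicit convection terms are advanced in smaller increments where the flow is fast.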
KIVA (software)
–
KIVA simulation of an experimental engine with DOHC quasi-symmetric pent-roof combustion chamber and 4 valves.
103.
Visualization (graphic)
–
Visualization or visualisation is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Today, visualization has ever-expanding applications in science, education, interactive multimedia, medicine, and other fields. A typical visualization application is the field of computer graphics. The invention of computer graphics may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization. The use of visualization to present information is not a new phenomenon; it has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia, a map of China, and Minard's map of Napoleon's invasion of Russia a century and a half ago. Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books that explain many of these principles. Computer graphics has from its beginning been used to study scientific problems; however, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started with a special issue of Computer Graphics.
Visualization (graphic)
–
Visualization of how a car deforms in an asymmetrical crash using finite element analysis.
Visualization (graphic)
–
The Ptolemy world map, reconstituted from Ptolemy's Geographia (circa 150), indicating the countries of " Serica " and "Sinae" (China) at the extreme right, beyond the island of "Taprobane" (Sri Lanka, oversized) and the "Aurea Chersonesus" (Southeast Asian peninsula).
Visualization (graphic)
–
A scientific visualization of a simulation of a Rayleigh–Taylor instability caused by two mixing fluids.
104.
Shape optimization
–
Shape optimization is part of the field of optimal control theory. The typical problem is to find the shape that is optimal in that it minimizes a certain cost functional while satisfying given constraints. In many cases, the functional being minimized depends on the solution of a given partial differential equation defined on the variable domain. Topology optimization is, in addition, concerned with the number of connected components/boundaries belonging to the domain. Topological optimization techniques can then help work around the limitations of pure shape optimization. Sometimes additional constraints need to be imposed to ensure uniqueness of the solution. Shape optimization is an infinite-dimensional optimization problem. Shape optimization problems are usually solved numerically, using iterative methods: one starts with an initial guess for the shape and gradually evolves it until it morphs into the optimal shape. To solve a shape optimization problem, one needs a way to represent a shape in computer memory and to follow its evolution. Several approaches are usually used. One approach is to follow the boundary of the shape: one can then evolve the shape by gradually moving the boundary points. This is called the Lagrangian approach. Alternatively, the shape can be represented implicitly by a function, as in the level-set method, and one then evolves that function rather than the boundary itself.
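The iterative evolve-until-optimal loop can be shown with a deliberately tiny example (a toy problem chosen for illustration, not a method from the literature): among rectangles of fixed area, find the side lengths that minimize the perimeter. Here the "shape" is just the pair of side lengths, and gradient descent stands in for the shape-evolution step.

```python
def optimize_rectangle(area=4.0, a0=8.0, lr=0.1, steps=200):
    """Toy shape optimization: minimize perimeter P(a) = 2*(a + area/a)
    over rectangles a x (area/a) of fixed area, by gradient descent."""
    a = a0
    for _ in range(steps):
        grad = 2.0 * (1.0 - area / a**2)  # dP/da
        a -= lr * grad                    # evolve the shape parameter
    return a, area / a

a, b = optimize_rectangle()               # converges to the square a = b = 2
```

The iterate converges to the square, the known minimizer; in real shape optimization the single parameter `a` is replaced by a discretized boundary or a level-set function, and the gradient comes from solving the governing PDE and its adjoint.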
Shape optimization
–
Example: Shape optimization as applied to building geometry. Example provided courtesy of Formsolver.com
Shape optimization
–
Example: Optimization shape families resulting from differing goal parameters. Example provided courtesy of Formsolver.com
105.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each variation of a book: for example, an e-book and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was published in 1970 as international standard ISO 2108. Related identifiers include the International Standard Serial Number (ISSN), which identifies periodical publications such as magazines, and the International Standard Music Number (ISMN), which covers musical scores. The SBN system was devised in 1967 in the United Kingdom by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit "0". For example, SBN 340-01381-8 can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format compatible with "Bookland" European Article Number EAN-13s.
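The conversions described above follow fixed check-digit rules: an SBN gains a leading "0" with its check digit unchanged, while a 13-digit ISBN replaces the ISBN-10 check digit with an EAN-13 check digit (each digit weighted alternately 1 and 3, summed modulo 10). A short sketch, using the SBN example from the text:

```python
def isbn13_check_digit(first12):
    """EAN-13 check digit: weight digits alternately 1 and 3, sum mod 10."""
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

def sbn_to_isbn13(sbn):
    """Convert a 9-digit SBN to ISBN-10 by prefixing '0', then to ISBN-13
    by prefixing '978', dropping the old check digit, and recomputing it."""
    isbn10 = "0" + sbn                    # the SBN check digit stays valid for ISBN-10
    first12 = "978" + isbn10[:9]          # drop the ISBN-10 check digit
    return first12 + isbn13_check_digit(first12)

isbn13 = sbn_to_isbn13("340013818")       # → "9780340013816"
```

The same check-digit routine confirms the example ISBN 978-3-16-148410-0 shown in the figure caption: the first twelve digits yield a check digit of 0.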
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
106.
Physics of Fluids
–
Physics of Fluids is a journal containing original research resulting from theoretical, computational, and experimental studies. From 1958 through 1988, the journal also included plasma physics. From 1989 until 1993 the journal was split into Physics of Fluids A, covering fluid dynamics, and Physics of Fluids B, covering plasma physics. In 1994, the former continued under its original name, Physics of Fluids. Physics of Fluids was originally published in cooperation with the American Physical Society's Division of Fluid Dynamics. In 2016, the American Physical Society founded Physical Review Fluids. From 1985 to 2015, Physics of Fluids published the Gallery of Fluid Motion, containing award-winning photographs and visual streaming media of fluid flow. Starting in 2016, the Gallery of Fluid Motion is published by the American Physical Society. According to the Journal Citation Reports, the journal has a 2014 impact factor of 2.031.
Physics of Fluids
–
Physics of Fluids
107.
Dortmund University of Technology
–
TU Dortmund University is a university in Dortmund, North Rhine-Westphalia, Germany, with over 30,000 students and over 3,000 staff. It is situated in the Ruhr area, the fourth largest urban area in Europe. The university is highly ranked in the areas of physics, electrical engineering, and economics. The University of Dortmund was founded during the decline of the coal and steel industry in the Ruhr region, and its establishment was seen as a step in the region's shift from heavy industry toward technology. The university's main areas of research are the natural sciences, engineering, pedagogy/teacher training in a wide spectrum of subjects, and journalism. In 2006, the University of Dortmund hosted the 11th Federation of International Robot-soccer Association (FIRA) RoboWorld Cup. The university's robot-soccer team, the Dortmund Droids, finished third in 2003. Following the Zeitgeist of the late 1960s in Germany, the university was built "auf der grünen Wiese" (on a greenfield site) about 2 miles outside of downtown Dortmund. One of the most prominent buildings in the university is the Mathetower, which houses the faculty of Mathematics. The first point of registration for .de-domains was at the Dortmund University Department of Computer Science. The first .de-domain was www.uni-dortmund.de.
Dortmund University of Technology
–
Dortmund University's Mathetower
Dortmund University of Technology
–
Official logo of the TU Dortmund University
Dortmund University of Technology
–
Student hostels
Dortmund University of Technology
–
Campus Food Court
108.
National Diet Library
–
The National Diet Library (NDL) is the only national library in Japan. It was established for the purpose of assisting members of the National Diet of Japan in researching matters of public policy. The library is similar in scope to the United States Library of Congress. The National Diet Library consists of two main facilities, in Tokyo and Kyoto, and several other branch libraries throughout Japan. The prewar Imperial Diet held limited power, and its need for information was "correspondingly small." The original Diet libraries "never developed either the services or the collections which might have made them vital adjuncts of genuinely responsible legislative activity." Until Japan's defeat, moreover, the executive had controlled all political documents, depriving the Diet of access to vital information. In 1946, each house of the Diet formed its own National Diet Library Standing Committee. The politician Hani Gorō envisioned the new body as both a "citadel of popular sovereignty" and the means of realizing a "peaceful revolution." The National Diet Library opened with an initial collection of 100,000 volumes. The first Librarian of the Diet Library was the politician Tokujirō Kanamori. The philosopher Masakazu Nakai served as the first Vice Librarian. In 1949, the NDL became the only national library in Japan. At this time the collection gained an additional million volumes previously housed in the former National Library in Ueno. In 1961, the NDL opened at its present location in Nagatachō, adjacent to the National Diet.
National Diet Library
–
Tokyo Main Library of the National Diet Library
National Diet Library
–
Kansai-kan of the National Diet Library
National Diet Library
–
The National Diet Library
National Diet Library
–
Main building in Tokyo
109.
Computational fluid dynamics
–
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the speed of complex simulation scenarios such as transonic or turbulent flows. Experimental validation of such software is performed using a wind tunnel, with the final validation coming in full-scale testing, e.g. flight tests. The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define single-phase fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows, these equations can be linearized to yield the linearized potential equations. Historically, methods were first developed to solve the potential equations. Two-dimensional methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. Lewis Fry Richardson's early hand calculations of the weather failed dramatically, but together with his 1922 book "Weather Prediction by Numerical Process" they set the basis for numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book. The available computer power paced the development of three-dimensional methods; early three-dimensional work was carried out in a group at Los Alamos National Laboratory led by Francis H. Harlow, widely considered one of the pioneers of CFD.
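The potential equations mentioned above reduce, for incompressible irrotational flow, to the Laplace equation for the velocity potential. A minimal sketch of the classical numerical approach (Jacobi iteration on a uniform grid; the boundary data here come from the harmonic function φ = x·y, chosen only so the answer can be checked, not from any particular flow problem):

```python
import numpy as np

def solve_laplace(n=32, iters=5000):
    """Jacobi iteration for the 2-D Laplace equation on the unit square,
    with Dirichlet boundary values taken from the harmonic function phi = x*y."""
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = X * Y                         # a known harmonic function for comparison
    phi = np.zeros((n, n))
    # Impose the boundary values; the interior starts from zero
    phi[0, :], phi[-1, :] = exact[0, :], exact[-1, :]
    phi[:, 0], phi[:, -1] = exact[:, 0], exact[:, -1]
    for _ in range(iters):
        # Five-point stencil: each interior node becomes the average of its neighbours
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2])
    return phi, exact

phi, exact = solve_laplace()
```

Jacobi iteration converges slowly (which is why production potential-flow solvers of the era moved to relaxation and multigrid methods), but for this small grid it recovers the harmonic solution throughout the interior.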
Computational fluid dynamics
–
Computational physics
Computational fluid dynamics
–
A computer simulation of high velocity air flow around the Space Shuttle during re-entry.
Computational fluid dynamics
–
A simulation of the Hyper-X scramjet vehicle in operation at Mach 7
Computational fluid dynamics
–
Volume rendering of a non-premixed swirl flame as simulated by LES.