1.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of computers in science. In physics, different theories based on mathematical models provide very precise predictions of how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression; in such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method: while computers can be used in experiments for the measurement and recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly, for several reasons: lack of algebraic and/or analytic solubility, and complexity. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity of many-body problems tend to grow quickly: a macroscopic system typically has on the order of 10^23 constituent particles, which poses a real problem. Solving quantum mechanical problems is generally of exponential order in the size of the system. Because computational physics covers a broad class of problems, it is generally divided according to the different mathematical problems it numerically solves, or the methods it applies. Furthermore, computational physics encompasses the tuning of the software and hardware structure to solve the problems. It is possible to find a corresponding computational branch for every field in physics; for example, computational mechanics, which consists of computational fluid dynamics (CFD) and computational solid mechanics.
One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics. The quantum many-body problem leads naturally to the large and rapidly growing field of computational chemistry. Computational solid state physics is an important division of computational physics dealing directly with material science. A field related to computational condensed matter is computational statistical mechanics; computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, it concerns itself with applications in the social sciences, network theory, and mathematical models for the propagation of disease. Computational astrophysics is the application of these techniques and methods to astrophysical problems.
Further reading: B. A. Stickler, E. Schachinger, Basic Concepts in Computational Physics; E. Winsberg, Science in the Age of Computer Simulation.
2.
Scientific visualization
–
Scientific visualization is an interdisciplinary branch of science. It is also considered a subset of computer graphics, a branch of computer science. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand, illustrate, and glean insight from their data. One of the earliest examples of scientific visualization was Maxwell's thermodynamic surface, which prefigured modern scientific techniques that use computer graphics. Scientific visualization using computer graphics gained in popularity as graphics matured; primary applications were scalar fields and vector fields from computer simulations and also measured data. The primary methods for visualizing two-dimensional scalar fields are color mapping and drawing contour lines; 2D vector fields are visualized using glyphs and streamlines or line integral convolution methods. For 3D scalar fields the primary methods are volume rendering and isosurfaces; methods for visualizing vector fields include glyphs such as arrows, streamlines and streaklines, particle tracing, line integral convolution and topological methods. Later, visualization techniques such as hyperstreamlines were developed to visualize tensor fields. Computer animation is the art, technique, and science of creating moving images via the use of computers. It is becoming increasingly common for such images to be created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic and low-bandwidth purposes. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium; it is also referred to as CGI, especially when used in films. A computer simulation is a simulation run on a single computer, or a network of computers. The simultaneous visualization and simulation of a system is called visulation. Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for months.
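As a concrete illustration of the color-mapping method mentioned above, the following sketch maps each value of a small 2D scalar field onto a blue-to-red gradient. The field values, the linear gradient, and the function names are assumptions of this example, not part of any particular visualization package.

```python
# Minimal color-mapping sketch: each scalar value is normalized to [0, 1]
# and mapped linearly onto a blue-to-red gradient (an illustrative choice).

def color_map(value, vmin, vmax):
    """Map a scalar in [vmin, vmax] to an (r, g, b) tuple of ints in [0, 255]."""
    t = (value - vmin) / (vmax - vmin)   # normalize to [0, 1]
    t = min(max(t, 0.0), 1.0)            # clamp out-of-range values
    return (int(255 * t), 0, int(255 * (1 - t)))  # low = blue, high = red

# Apply the map to every sample of a small 2D scalar field.
field = [[0.0, 0.5], [0.75, 1.0]]
image = [[color_map(v, 0.0, 1.0) for v in row] for row in field]
```

Real visualization systems offer many perceptually tuned color maps; the linear red-blue ramp here only shows the principle of turning a scalar field into an image.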
Information visualization is focused on the creation of approaches for conveying information in intuitive ways. The key difference between scientific visualization and information visualization is that information visualization is often applied to data that is not generated by scientific inquiry; some examples are graphical representations of data for business, government, news and social media. Interface technology and perception shows how new interfaces and an understanding of underlying perceptual issues create new opportunities for the scientific visualization community. Rendering is the process of generating an image from a model; the model is a description of three-dimensional objects in a strictly defined language or data structure. It would contain geometry, viewpoint, texture and lighting information; the image is a digital image or raster graphics image. The term may be used by analogy with an artist's rendering of a scene.
Scientific visualization
–
A scientific visualization of a simulation of a Rayleigh–Taylor instability caused by two mixing fluids.
Scientific visualization
–
Surface rendering of Arabidopsis thaliana pollen grains with a confocal microscope.
Scientific visualization
–
Scientific visualization of fluid flow: surface waves in water.
Scientific visualization
–
Chemical imaging of a simultaneous release of SF6 and NH3.
3.
Lennard-Jones potential
–
The Lennard-Jones potential is a mathematically simple model that approximates the interaction between a pair of neutral atoms or molecules. A form of this potential was first proposed in 1924 by John Lennard-Jones. At r_m, the function has the value −ε. The distances are related as r_m = 2^(1/6) σ ≈ 1.122 σ, and these parameters can be fitted to reproduce experimental data or accurate quantum chemistry calculations. Due to its simplicity, the Lennard-Jones potential is used extensively in computer simulations even though more accurate potentials exist. Differentiating the L-J potential with respect to r gives an expression for the net inter-molecular force between two molecules; this inter-molecular force may be attractive or repulsive, depending on the value of r. When r is very small, the two molecules repel each other. Whereas the functional form of the attractive term has a clear physical justification, the repulsive term has no theoretical justification; it is used because it approximates the Pauli repulsion well, and is convenient due to the relative computing efficiency of calculating r^12 as the square of r^6. The Lennard-Jones potential was improved upon by the Buckingham potential, later proposed by R. A. Buckingham, in which the repulsive part is an exponential function. The L-J potential is a good approximation; due to its simplicity, it is often used to describe the properties of gases. It is especially accurate for noble gas atoms, and is a good approximation at long and short distances for neutral atoms. The lowest-energy arrangement of a number of atoms described by a Lennard-Jones potential is hexagonal close-packing. On raising the temperature, the lowest free-energy arrangement becomes cubic close-packing; under pressure, the lowest-energy structure switches between cubic and hexagonal close packing. Real materials include BCC structures also. Other more recent methods, such as the Stockmayer potential, describe the interaction of molecules more accurately.
Quantum chemistry methods such as Møller–Plesset perturbation theory, the coupled cluster method, or full configuration interaction can give accurate results. There are many different ways to formulate the Lennard-Jones potential. The A-B form, V_LJ = A/r^12 − B/r^6, is a formulation used by some simulation software.
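A minimal sketch of the 12-6 potential in the common (ε, σ) parametrization, which is equivalent to the A-B form above with A = 4εσ^12 and B = 4εσ^6. The parameter values are illustrative defaults, not fitted to any real substance.

```python
# 12-6 Lennard-Jones potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
# Computing (sigma/r)**6 once and squaring it mirrors the efficiency point
# made in the text: r^12 is obtained as the square of r^6.

def lj_potential(r, epsilon=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The minimum sits at r_m = 2**(1/6) * sigma, where the potential equals -epsilon.
r_m = 2.0 ** (1.0 / 6.0)
```

At r = σ the potential crosses zero, and at r = r_m it reaches its minimum depth −ε, matching the relations quoted in the text.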
Lennard-Jones potential
–
A graph of strength versus distance for the 12-6 Lennard-Jones potential.
4.
Morse potential
–
The Morse potential, named after physicist Philip M. Morse, is a convenient interatomic interaction model for the potential energy of a diatomic molecule. It is a better approximation for the vibrational structure of the molecule than the quantum harmonic oscillator (QHO) because it explicitly includes the effects of bond breaking. It also accounts for the anharmonicity of real bonds and the non-zero transition probability for overtone and combination bands. The Morse potential can also be used to model other interactions, such as the interaction between an atom and a surface. Due to its simplicity, it is not used in modern spectroscopy; however, its mathematical form inspired the MLR potential, which is the most popular potential energy function used for fitting spectroscopic data. The dissociation energy of the bond can be calculated by subtracting the zero-point energy E_0 from the depth of the well. Since the zero of energy is arbitrary, the equation for the Morse potential can be rewritten in any number of ways by adding or subtracting a constant value. One common form approaches zero as r grows large and equals −D_e at its minimum. It clearly shows that the Morse potential is the combination of a short-range repulsion term and a long-range attractive term. Like the quantum harmonic oscillator, the energies and eigenstates of the Morse potential can be found using operator methods; one approach involves applying the factorization method to the Hamiltonian. Whereas the energy spacing between levels in the quantum harmonic oscillator is constant at hν0, the energy between adjacent levels decreases with increasing v in the Morse oscillator. Mathematically, the spacing of Morse levels is E(v+1) − E(v) = hν0 − (v+1)(hν0)^2/(2D_e), and this trend matches the anharmonicity found in real molecules. However, this equation fails above some value v_m where E(v+1) − E(v) is calculated to be zero or negative; specifically, v_m = (2D_e − hν0)/(hν0). This failure is due to the finite number of bound levels in the Morse potential. For energies above the level v_m, all possible energy levels are allowed.
Below v_m, E(v) is a good approximation for the true vibrational structure in non-rotating diatomic molecules. An important extension of the Morse potential that made the Morse form very useful for spectroscopy is the MLR potential. The MLR potential is used as a standard for representing spectroscopic and/or virial data of diatomic molecules by a potential energy curve.
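The form of the potential and the decreasing level spacing described above can be sketched as follows. The parameter values (D_e, a, r_e, hν0) are arbitrary illustrative choices, and energies are measured from the bottom of the well.

```python
import math

# Morse potential written so that V -> 0 as r -> infinity and V(r_e) = -D_e,
# the convention described in the text. Parameters are illustrative.
def morse(r, D_e=1.0, a=1.0, r_e=1.0):
    x = 1.0 - math.exp(-a * (r - r_e))
    return D_e * (x * x - 1.0)

# Vibrational energies measured from the well bottom:
# E_v = h*nu0*(v + 1/2) - (h*nu0*(v + 1/2))**2 / (4*D_e),
# which reproduces the level spacing hnu0 - (v+1)*(hnu0)**2/(2*D_e).
def morse_level(v, h_nu0=0.1, D_e=1.0):
    x = h_nu0 * (v + 0.5)
    return x - x * x / (4.0 * D_e)
```

Successive spacings morse_level(v+1) − morse_level(v) shrink linearly with v, which is the anharmonic trend the entry describes, in contrast to the constant spacing of the harmonic oscillator.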
5.
Finite difference method
–
Today, FDMs are the dominant approach to numerical solutions of partial differential equations. First, assuming the function whose derivatives are to be approximated is properly behaved, by Taylor's theorem we can create a Taylor series expansion f(x_0 + h) = f(x_0) + (f'(x_0)/1!) h + (f''(x_0)/2!) h^2 + ... + (f^(n)(x_0)/n!) h^n + R_n(x), where n! denotes the factorial of n and R_n(x) is a remainder term. The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain; this is usually done by dividing the domain into a uniform grid. Note that this means that finite-difference methods produce sets of discrete numerical approximations to the derivative. An expression of general interest is the local truncation error of a method. Typically expressed using big-O notation, local truncation error refers to the error from a single application of a method. That is, it is the quantity f'(x_i) − f'_i, where f'(x_i) refers to the exact value and f'_i to the numerical approximation. The remainder term of a Taylor polynomial is convenient for analyzing the local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for f, R_n(x) = (f^(n+1)(ξ)/(n+1)!) h^(n+1), where x_0 < ξ < x_0 + h, the dominant term of the local truncation error can be discovered. For example, again using the formula for the first derivative and some algebraic manipulation, this leads to (f(x_i + h) − f(x_i))/h = f'(x_i) + (f''(ξ)/2) h. A final expression of this example and its order is (f(x_i + h) − f(x_i))/h = f'(x_i) + O(h). This means that, in this case, the local truncation error is proportional to the step size. The quality and duration of a simulated FDM solution depend on the discretization equation selection; the data quality and simulation duration improve significantly with smaller step size. Therefore, a balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice.
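The O(h) behavior of the forward-difference truncation error can be checked numerically. This sketch uses f(x) = x^3 at x = 2 (an arbitrary test function with known derivative f'(2) = 12); halving the step size should roughly halve the error.

```python
# Forward difference (f(x+h) - f(x)) / h approximates f'(x) with error O(h):
# the dominant error term is f''(x) * h / 2, so halving h halves the error.

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 3            # test function with f'(2) = 12 exactly
err_h  = abs(forward_diff(f, 2.0, 1e-3) - 12.0)
err_h2 = abs(forward_diff(f, 2.0, 5e-4) - 12.0)
# err_h2 / err_h is close to 1/2, consistent with first-order accuracy
```

A central difference (f(x+h) − f(x−h))/(2h) would instead show the ratio approaching 1/4, since its truncation error is O(h^2).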
However, time steps which are too large may create instabilities; the von Neumann method is usually applied to determine the numerical model's stability. For example, consider the ordinary differential equation u' = 3u + 2. Replacing the derivative with a forward difference quotient turns this into a finite-difference equation, and solving that equation gives an approximate solution to the differential equation.
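The worked example u' = 3u + 2 can be discretized with the forward difference (u_{n+1} − u_n)/h = 3u_n + 2, i.e. the explicit Euler scheme. The initial value u(0) = 1 and the step size below are assumptions of this sketch; the exact solution u(t) = (u(0) + 2/3)e^{3t} − 2/3 is used to check the error.

```python
import math

# Explicit Euler for u' = 3u + 2, from the forward-difference discretization
# (u_{n+1} - u_n)/h = 3*u_n + 2, i.e. u_{n+1} = u_n + h*(3*u_n + 2).

def euler(u0, h, steps):
    u = u0
    for _ in range(steps):
        u = u + h * (3.0 * u + 2.0)
    return u

u0, t = 1.0, 0.5
steps = 2000
exact = (u0 + 2.0 / 3.0) * math.exp(3.0 * t) - 2.0 / 3.0   # closed-form solution
approx = euler(u0, t / steps, steps)
```

Halving the step size roughly halves the global error, the first-order behavior derived above; too large a step (here, h well above 2/3) would make the iteration unstable, which is what a von Neumann-style stability analysis detects in the PDE setting.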
Finite difference method
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
6.
Finite element method
–
The finite element method (FEM) is a numerical method for solving problems of engineering and mathematical physics. It is also referred to as finite element analysis (FEA). Typical problem areas of interest include structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The analytical solution of these problems generally requires the solution of boundary value problems for partial differential equations; the finite element method formulation of the problem results in a system of algebraic equations. The method yields approximate values of the unknowns at a discrete number of points over the domain. To solve the problem, it subdivides a large problem into smaller, simpler parts that are called finite elements. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function; the global system of equations has known solution techniques, and can be calculated from the initial values of the original problem to obtain a numerical answer. To explain the approximation in this process, FEM is commonly introduced as a special case of the Galerkin method. The process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions and set the integral to zero. In simple terms, it is a procedure that minimizes the error of approximation by fitting trial functions into the PDE; the residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. These equation sets are the element equations; they are linear if the underlying PDE is linear, and nonlinear otherwise. In a subsequent step, a global system of equations is generated from the element equations through a transformation of coordinates from the subdomains' local nodes to the domain's global nodes.
This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is carried out by FEM software using coordinate data generated from the subdomains. FEM is best understood from its practical application, known as finite element analysis (FEA). FEA as applied in engineering is a computational tool for performing engineering analysis. It includes the use of mesh generation techniques for dividing a complex problem into small elements. FEA is a good choice for analyzing problems over complicated domains, when the domain changes, when the desired precision varies over the entire domain, or when the solution lacks smoothness. For instance, in a crash simulation it is possible to increase prediction accuracy in important areas like the front of the car. Another example would be in weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena rather than relatively calm areas.
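The assemble-and-solve process described above can be sketched in one dimension. This is a minimal illustration, not what production FEM packages do: it solves −u'' = f on (0, 1) with u(0) = u(1) = 0 using piecewise-linear elements on a uniform mesh, where the element equations assemble into a tridiagonal global system. The choice f = 2 is an assumption made so that the exact solution u(x) = x(1 − x) is known.

```python
# Minimal 1D finite element sketch for -u'' = f, u(0) = u(1) = 0, with
# piecewise-linear ("hat") basis functions on a uniform mesh of n elements.
# Assembly yields a tridiagonal stiffness matrix (diag 2/h, off-diag -1/h)
# and load vector f*h; the system is solved by the Thomas algorithm.

def fem_1d_poisson(n, f=2.0):
    """Return nodal values u_1 .. u_{n-1} at the interior mesh nodes."""
    h = 1.0 / n
    m = n - 1                       # number of interior nodes
    diag = [2.0 / h] * m            # assembled stiffness diagonal
    off = -1.0 / h                  # constant sub/super-diagonal
    rhs = [f * h] * m               # load: integral of f times each hat function
    # Forward elimination (Thomas algorithm).
    for i in range(1, m):
        w = off / diag[i - 1]
        diag[i] -= w * off
        rhs[i] -= w * rhs[i - 1]
    # Back substitution.
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - off * u[i + 1]) / diag[i]
    return u

u = fem_1d_poisson(8)
```

For this 1D problem with exact load integration, the linear-element FEM solution agrees with the exact solution x(1 − x) at every mesh node, a well-known property of the 1D Poisson problem.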
Finite element method
–
Visualization of how a car deforms in an asymmetrical crash using finite element analysis. [1]
Finite element method
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
7.
Monte Carlo method
–
Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. Their essential idea is using randomness to solve problems that might be deterministic in principle; they are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. In principle, Monte Carlo methods can be applied to any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean of independent samples of the variable. When the probability distribution of the variable is parametrized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler; the central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired distribution; by the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler. In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation; in other instances we are given a flow of probability distributions with an increasing level of sampling complexity. These models can also be seen as the evolution of the law of the states of a nonlinear Markov chain. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methodologies, these mean field particle techniques rely on sequential interacting samples; the terminology mean field reflects the fact that each of the samples interacts with the empirical measures of the process.
Monte Carlo methods vary, but tend to follow a particular pattern: generate inputs randomly from a probability distribution over the domain, then perform a deterministic computation on the inputs. For example, consider a circle inscribed in a unit square. Given that the circle and the square have a ratio of areas that is π/4, uniformly scatter objects of uniform size over the square; count the number of objects inside the circle and the total number of objects. The ratio of the two counts is an estimate of the ratio of the two areas, which is π/4; multiply the result by 4 to estimate π. In this procedure the domain of inputs is the square that circumscribes our circle, and we generate random inputs by scattering grains over the square, then perform a computation on each input. Finally, we aggregate the results to obtain our final result. There are two important points to consider here: firstly, if the grains are not uniformly distributed, then our approximation will be poor. Secondly, there should be a large number of inputs.
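The circle-in-square procedure described above can be sketched directly. The sample count and the fixed seed (for reproducibility) are assumptions of this example.

```python
import random

# Monte Carlo estimate of pi: scatter points uniformly over the unit square,
# count those falling inside the inscribed circle of radius 1/2, and multiply
# the inside/total ratio (which estimates pi/4) by 4.

def estimate_pi(n, seed=0):
    rng = random.Random(seed)           # fixed seed for a reproducible sketch
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:   # inside the circle
            inside += 1
    return 4.0 * inside / n

pi_hat = estimate_pi(100_000)
```

With 100,000 samples the standard error of the estimate is about 0.005, which illustrates the second point above: the approximation is only as good as the number of uniformly distributed inputs.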
8.
Molecular dynamics
–
Molecular dynamics (MD) is a computer simulation method for studying the physical movements of atoms and molecules, and is thus a type of N-body simulation. The atoms and molecules are allowed to interact for a fixed period of time. The method was developed within the field of theoretical physics in the late 1950s but is applied today mostly in chemical physics and materials science. Following the earlier successes of Monte Carlo simulations, the method was developed by Fermi, Pasta, and Ulam; in 1957, Alder and Wainwright used an IBM 704 computer to simulate perfectly elastic collisions between hard spheres. In 1960, Gibson et al. simulated radiation damage of solid copper by using a Born–Mayer type of repulsive interaction along with a surface force. In 1964, Rahman published landmark simulations of liquid argon that used a Lennard-Jones potential; calculations of system properties, such as the coefficient of self-diffusion, compared well with experimental data. Even before it became possible to simulate molecular dynamics with computers, some undertook the hard work of trying it with physical models; the idea was to arrange macroscopic spheres to replicate the properties of a liquid. As one early researcher recalled: "I took a number of balls and stuck them together with rods of a selection of different lengths ranging from 2.75 to 4 inches. I tried to do this in the first place as casually as possible, working in my own office, being interrupted every five minutes or so and not remembering what I had done before the interruption." In physics, MD is used to examine the dynamics of atomic-level phenomena that cannot be observed directly, such as thin film growth; it is also used to examine the physical properties of nanotechnological devices that have not been or cannot yet be created. In principle MD can be used for ab initio prediction of protein structure by simulating folding of the polypeptide chain from a random coil.
The results of MD simulations can be tested through comparison to experiments that measure molecular dynamics. Michael Levitt, who shared the Nobel Prize awarded in part for the application of MD to proteins, wrote in 1999 that CASP participants usually did not use the method due to "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure." Limits of the method are related to the parameter sets used, and to the underlying molecular mechanics force fields. The neglected contributions include the conformational entropy of the polypeptide chain. Another important factor is intramolecular hydrogen bonds, which are not explicitly included in modern force fields, but described as Coulomb interactions of atomic point charges. This is an approximation because hydrogen bonds have a partially quantum mechanical and chemical nature. Furthermore, electrostatic interactions are calculated using the dielectric constant of vacuum; using the macroscopic dielectric constant at short distances is questionable. Finally, van der Waals interactions in MD are usually described by Lennard-Jones potentials based on the Fritz London theory that is applicable in vacuum.
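The basic MD loop can be sketched with a deliberately tiny model: one particle moving in one dimension in the Lennard-Jones field of a fixed neighbor, integrated with the velocity Verlet scheme. The reduced units (ε = σ = mass = 1), the initial separation, and the time step are assumptions of this toy example, not a production MD setup.

```python
# Toy molecular dynamics: one particle in 1D in a 12-6 Lennard-Jones field,
# integrated with velocity Verlet. A symplectic integrator like Verlet keeps
# the total energy (kinetic + potential) nearly constant, which is the usual
# sanity check for an MD integration.

def lj_energy(r):
    return 4.0 * (r ** -12 - r ** -6)

def lj_force(r):
    """F = -dV/dr = 4*(12/r^13 - 6/r^7); positive pushes the particle outward."""
    return 4.0 * (12.0 / r ** 13 - 6.0 / r ** 7)

def simulate(r0=1.5, v0=0.0, dt=1e-3, steps=5000):
    r, v = r0, v0
    f = lj_force(r)
    for _ in range(steps):
        r += v * dt + 0.5 * f * dt * dt     # position update
        f_new = lj_force(r)
        v += 0.5 * (f + f_new) * dt         # velocity update (averaged force)
        f = f_new
    return r, v

r, v = simulate()
total_energy = lj_energy(r) + 0.5 * v * v   # should match the initial energy
```

Starting from rest at r = 1.5, the particle oscillates about the potential minimum near r ≈ 1.122 while the total energy stays close to its initial value lj_energy(1.5); a real MD code does the same update for every pair interaction in three dimensions.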
Molecular dynamics
–
Example of a molecular dynamics simulation in a simple system: deposition of a single Cu atom on a Cu (001) surface. Each circle illustrates the position of a single atom; note that the actual atomic interactions used in current simulations are more complex than those of 2-dimensional hard spheres.
9.
John von Neumann
–
John von Neumann was a Hungarian-American mathematician, physicist, inventor, computer scientist, and polymath. He made major contributions to a number of fields, including mathematics, physics, economics, computing, and statistics. He published over 150 papers in his life: about 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. His last work, an unfinished manuscript written while in the hospital, was later published in book form as The Computer and the Brain. His analysis of the structure of self-replication preceded the discovery of the structure of DNA. Summarizing his own contributions, he singled out "my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan Project, developing the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission; along with theoretical physicist Edward Teller, mathematician Stanislaw Ulam, and others, he worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann was born Neumann János Lajos to a wealthy, acculturated family. His place of birth was Budapest in the Kingdom of Hungary, which was then part of the Austro-Hungarian Empire. He was the eldest of three children; he had two younger brothers, Michael, born in 1907, and Nicholas, who was born in 1911. His father, Neumann Miksa, was a banker who held a doctorate in law; he had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were both born in Ond, Zemplén County, northern Hungary. John's mother was Kann Margit; her parents were Jakab Kann and Katalin Meisels. Three generations of the Kann family lived in apartments above the Kann-Heller offices in Budapest.
In 1913, his father was elevated to the nobility for his service to the Austro-Hungarian Empire by Emperor Franz Joseph; the Neumann family thus acquired the hereditary appellation Margittai, meaning "of Marghita". The family had no connection with the town; the appellation was chosen in reference to Margaret. Neumann János became Margittai Neumann János, which he later changed to the German Johann von Neumann. Von Neumann was a child prodigy: as a six-year-old, he could multiply and divide two 8-digit numbers in his head, and could converse in Ancient Greek. When he once caught his mother staring aimlessly, the six-year-old von Neumann asked her what she was calculating. Formal schooling did not start in Hungary until the age of ten; instead, governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages other than Hungarian was essential, so the children were tutored in English, French, German and Italian. One of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor, to hold a private library Max had purchased. Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1911; this was one of the best schools in Budapest, part of a brilliant education system designed for the elite.
John von Neumann
–
Excerpt from the university calendars for 1928 and 1928–1929 of the Friedrich-Wilhelms-Universität Berlin announcing Neumann's lectures on axiomatic set theory and logic, problems in quantum mechanics and special mathematical functions. Notable colleagues were Georg Feigl, Issai Schur, Erhard Schmidt, Leó Szilárd, Heinz Hopf, Adolf Hammerstein and Ludwig Bieberbach.
John von Neumann
–
John von Neumann in the 1940s
John von Neumann
–
Julian Bigelow, Herman Goldstine, J. Robert Oppenheimer and John von Neumann at the Princeton Institute for Advanced Study.
John von Neumann
–
Von Neumann's gravestone
10.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics, and astrophysics. Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of the effect of forces on fluid motion. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Fluid mechanics can be mathematically complex, and can best be solved by numerical methods. A modern discipline, called computational fluid dynamics, is devoted to this approach to solving fluid mechanics problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Inviscid flow was further analyzed by various mathematicians, and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille. Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics, the study of fluids in motion. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics, to meteorology, and to medicine. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of liquids and gases in motion. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure and density. It has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics.
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations; fundamentally, mass and momentum are assumed to be conserved. This can be expressed as an equation in integral form over the control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under this assumption, fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic speed flows. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis. The Navier–Stokes equations are differential equations that describe the force balance at a given point within a fluid.
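The Knudsen-number check described above is simple enough to sketch directly. The mean free path of air at sea level (about 68 nm) and the 0.1 threshold are the figures quoted in this kind of estimate; the length scales chosen are illustrative.

```python
# Continuum-hypothesis check via the Knudsen number Kn = lambda / L, the
# ratio of the molecular mean free path to the characteristic length scale.
# Kn below 0.1 is taken to justify the continuum treatment.

def knudsen(mean_free_path, length_scale):
    return mean_free_path / length_scale

def continuum_ok(kn):
    """Continuum hypothesis considered valid for Kn < 0.1."""
    return kn < 0.1

MFP_AIR = 68e-9                          # mean free path of air at sea level, ~68 nm
kn_macroscopic = knudsen(MFP_AIR, 1.0)   # flow around a 1 m object: continuum holds
kn_nanoscale = knudsen(MFP_AIR, 100e-9)  # flow in a 100 nm channel: continuum fails
```

For the 1 m object Kn is of order 10^-8, far into the continuum regime, while in the 100 nm channel Kn ≈ 0.68, so the problem would instead need a statistical-mechanical (e.g. kinetic theory) treatment.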
Fluid mechanics
–
Balance for some integrated fluid quantity in a control volume enclosed by a control surface.
11.
Fluid dynamics
–
In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids. It has several subdisciplines, including aerodynamics and hydrodynamics. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics; this is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are based on classical mechanics and are modified in quantum mechanics. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects; however, the continuum assumption assumes that fluids are continuous, rather than discrete. The fact that the fluid is made up of molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve; some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. Three conservation laws are used to solve fluid dynamics problems; the conservation laws may be applied to a region of the flow called a control volume. A control volume is a volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Mass continuity: the rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite the sense of flow into the system, the term is negated.
The first term on the right is the net rate at which momentum is convected into the volume; the second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive. The third term on the right is the net acceleration of the mass within the volume due to any body forces. Surface forces, such as viscous forces, are represented by F surf. The following is the form of the momentum conservation equation.
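A standard integral form consistent with the term-by-term description above (reconstructed, since the equation itself was lost in extraction):

```latex
\frac{\mathrm{d}}{\mathrm{d}t} \iiint_{V} \rho\,\mathbf{u} \,\mathrm{d}V
  \;=\; - \iint_{S} \rho\,\mathbf{u}\,(\mathbf{u}\cdot\mathbf{n}) \,\mathrm{d}A
        \;-\; \iint_{S} p\,\mathbf{n} \,\mathrm{d}A
        \;+\; \iiint_{V} \rho\,\mathbf{f}_{\mathrm{body}} \,\mathrm{d}V
        \;+\; \mathbf{F}_{\mathrm{surf}}
```

The first surface integral is the convected momentum, the second is the pressure force, the volume integral on the right collects body forces, and F surf stands for the remaining surface forces, matching the description in the text.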
Fluid dynamics
12.
Supercomputer
–
A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of instructions per second. As of 2015, there were supercomputers which could perform up to quadrillions of FLOPS. The Sunway TaihuLight tops the rankings in the TOP500 supercomputer list, and its emergence is also notable for its use of indigenous chips. As of June 2016, China, for the first time, had more computers on the TOP500 list than the United States, although U.S.-built computers held ten of the top 20 positions; in November 2016 the U.S. had five of the top 10. Throughout their history, supercomputers have been essential in the field of cryptanalysis. The use of multi-core processors combined with centralization is an emerging trend. The history of supercomputing goes back to the 1960s, with the Atlas at the University of Manchester and a series of computers at Control Data Corporation (CDC) designed by Seymour Cray. These used innovative designs and parallelism to achieve superior computational peak performance. Cray left CDC in 1972 to form his own company, Cray Research; four years after leaving CDC, he delivered the 80 MHz Cray-1 in 1976. The Cray-2, released in 1985, was an 8-processor liquid-cooled computer through which Fluorinert was pumped as it operated. It performed at 1.9 gigaFLOPS and was the second fastest after the M-13 supercomputer in Moscow. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a speed of 1.7 gigaFLOPS per processor. The Hitachi SR2201 obtained a performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.
The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations; the Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface. Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems; supercomputers of the 21st century can use over 100,000 processors connected by fast interconnects. The Connection Machine CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second. Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers; the large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system to air cooling with normal air-conditioning temperatures. Systems with a massive number of processors generally take one of two paths.
Supercomputer
–
IBM's Blue Gene/P supercomputer at Argonne National Laboratory runs over 250,000 processors using normal data center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network
Supercomputer
–
A Cray-1 preserved at the Deutsches Museum
Supercomputer
–
A Blue Gene/L cabinet showing the stacked blades, each holding many processors
Supercomputer
–
An IBM HS20 blade
13.
Wind tunnel
–
A wind tunnel is a tool used in aerodynamic research to study the effects of air moving past solid objects. A wind tunnel consists of a tubular passage with the object under test mounted in the middle. Air is made to move past the object by a powerful fan system or other means. The test object, often called a wind tunnel model, is instrumented with suitable sensors to measure aerodynamic forces, pressure distribution, and other aerodynamic-related characteristics. The earliest wind tunnels were invented towards the end of the 19th century, in the early days of aeronautic research; in that way an observer could study the flying object in action. The development of wind tunnels accompanied the development of the airplane, and large wind tunnels were built during World War II. Wind tunnel testing was considered of strategic importance during the Cold War development of supersonic aircraft. Wind tunnels are also used to study the forces that wind exerts on large structures; determining such forces was required before building codes could specify the required strength of such buildings. In studies of road vehicles, the interaction between the road and the vehicle plays a significant role, and this interaction must be taken into consideration when interpreting the test results. Advances in computational fluid dynamics (CFD) modelling on high-speed digital computers have reduced the demand for wind tunnel testing; however, CFD results are not yet completely reliable, and wind tunnels are used to verify CFD predictions. Air velocity and pressures are measured in several ways in wind tunnels: air velocity through the test section is determined by Bernoulli's principle, from measurement of the dynamic pressure, the static pressure, and the temperature rise in the airflow. The direction of airflow around a model can be determined by tufts of yarn attached to the aerodynamic surfaces, and the direction of airflow approaching a surface can be visualized by mounting threads in the airflow ahead of and aft of the test model. Smoke or bubbles of liquid can be introduced into the airflow upstream of the test model.
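As a rough illustration of the Bernoulli relation mentioned above, the test-section airspeed can be recovered from the measured dynamic pressure q = ½ρv². The numbers and function name below are made up for the example, not taken from the article:

```python
import math

def airspeed_from_dynamic_pressure(q, rho):
    """Invert q = 0.5 * rho * v**2 for the airspeed v (incompressible flow)."""
    return math.sqrt(2.0 * q / rho)

# Example: a dynamic pressure of 600 Pa in sea-level air (rho ~ 1.2 kg/m^3)
# corresponds to an airspeed of about 31.6 m/s.
v = airspeed_from_dynamic_pressure(600.0, 1.2)
```

The same relation is what a pitot-static probe exploits: the dynamic pressure is the difference between the total and static pressures.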
Aerodynamic forces on the test model are usually measured with beam balances, connected to the test model with beams, strings, or cables. Pressure distributions can more conveniently be measured with pressure-sensitive paint or with a pressure belt; the strip is attached to the aerodynamic surface with tape, and it sends signals depicting the pressure distribution along its surface. The aerodynamic properties of an object cannot all remain the same for a scaled model; however, by observing certain similarity rules, a very satisfactory correspondence between the aerodynamic properties of a scaled model and a full-size object can be achieved.
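One common similarity rule of the kind mentioned above is matching the Reynolds number between model and full-size object. A minimal sketch, with illustrative numbers not drawn from the article:

```python
def reynolds_number(rho, v, length, mu):
    """Re = rho * v * L / mu for a characteristic length L."""
    return rho * v * length / mu

# Approximate sea-level air properties: density and dynamic viscosity.
rho, mu = 1.225, 1.81e-5

# Full-size object: 20 m characteristic length at 100 m/s.
re_full = reynolds_number(rho, 100.0, 20.0, mu)

# A 1:10 scale model in the same fluid must be tested at 10x the speed
# to match the full-scale Reynolds number.
re_model = reynolds_number(rho, 1000.0, 2.0, mu)
```

In practice the required model speed can be impractically high, which is one reason pressurized and cryogenic wind tunnels exist: raising density or lowering viscosity raises Re without raising speed.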
Wind tunnel
–
NASA wind tunnel with the model of a plane.
Wind tunnel
–
A model Cessna with helium-filled bubbles showing pathlines of the wingtip vortices.
Wind tunnel
–
Replica of the Wright brothers' wind tunnel.
Wind tunnel
–
Eiffel's wind tunnels in the Auteuil laboratory
14.
Space Shuttle
–
The Space Shuttle was a partially reusable low Earth orbital spacecraft system operated by the U.S. National Aeronautics and Space Administration (NASA), as part of the Space Shuttle program. Its official program name was Space Transportation System (STS), taken from a 1969 plan for a system of reusable spacecraft of which it was the only item funded for development. The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982. Five complete Shuttle systems were built and used on a total of 135 missions from 1981 to 2011; the Shuttle fleet's total mission time was 1,322 days, 19 hours, 21 minutes and 23 seconds. Shuttle components included the Orbiter Vehicle (OV), a pair of solid rocket boosters (SRBs), and the external tank (ET). The Shuttle was launched vertically, like a conventional rocket, with the two SRBs operating in parallel with the OV's three main engines, which were fueled from the ET. The SRBs were jettisoned before the vehicle reached orbit, and the ET was jettisoned just before orbit insertion. At the conclusion of the mission, the orbiter fired its Orbital Maneuvering System (OMS) to de-orbit and re-enter the atmosphere. The orbiter then glided as a spaceplane to a landing, usually at the Shuttle Landing Facility of the Kennedy Space Center (KSC) or at Rogers Dry Lake in Edwards Air Force Base. After landing at Edwards, the orbiter was flown back to the KSC on the Shuttle Carrier Aircraft. The first orbiter, Enterprise, was built in 1976 and used in Approach and Landing Tests. Four fully operational orbiters were initially built: Columbia, Challenger, Discovery, and Atlantis. Of these, two were lost in accidents, Challenger in 1986 and Columbia in 2003, with a total of fourteen astronauts killed. A fifth operational orbiter, Endeavour, was built in 1991 to replace Challenger. The Space Shuttle was retired from service upon the conclusion of Atlantis's final flight on July 21, 2011. Nixon's post-Apollo NASA budgeting withdrew support of all system components except the Shuttle.
The vehicle consisted of a spaceplane for orbit and re-entry, fueled by an expendable external tank carrying liquid hydrogen and liquid oxygen, with strap-on solid rocket boosters. The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982, all launched from the Kennedy Space Center, Florida. The system was retired from service in 2011 after 135 missions, the program ending after Atlantis landed at the Kennedy Space Center on July 21, 2011. Major missions included launching numerous satellites and interplanetary probes and conducting space science experiments. The first orbiter vehicle, named Enterprise, was built for the initial Approach and Landing Tests phase and lacked the engines, heat shielding, and other equipment necessary for orbital flight. A total of five operational orbiters were built, and they were used for orbital space missions by NASA, the US Department of Defense, the European Space Agency, Japan, and Germany. The United States funded Shuttle development and operations except for the Spacelab modules used on D1 and SL-J; SL-J was partially funded by Japan.
Space Shuttle
–
Discovery lifts off at the start of STS-120.
Space Shuttle
–
STS-129 ready for launch
Space Shuttle
–
President Nixon (right) with NASA Administrator Fletcher in January 1972, three months before Congress approved funding for the Shuttle program
Space Shuttle
–
STS-1 on the launch pad, December 1980
15.
Hyper-X
–
The X-43 was an unmanned experimental hypersonic aircraft with multiple planned scale variations meant to test various aspects of hypersonic flight. It was part of the X-plane series and specifically of NASA's Hyper-X program, and it set several airspeed records for jet-propelled aircraft. The X-43 is the fastest aircraft on record at approximately Mach 9.6. A winged booster rocket with the X-43 placed on top, called a stack, was drop-launched from a Boeing B-52 Stratofortress. After the booster rocket brought the stack to the target speed and altitude, it was discarded, and the X-43 flew free using its own engine. The first plane in the series, the X-43A, was a single-use vehicle. The X-43 was part of NASA's Hyper-X program, involving the American space agency and contractors such as Boeing, Micro Craft Inc, Orbital Sciences Corporation and General Applied Science Laboratory (GASL). Micro Craft Inc. built the X-43A and GASL built its engine. One of the primary goals of NASA's Aeronautics Enterprise, as delineated in the NASA Strategic Plan, specified the development and demonstration of technologies for air-breathing hypersonic flight. Following the cancellation of the National Aerospace Plane program in November 1994, Langley was the lead center, responsible for hypersonic technology development, while Dryden was responsible for flight research. Phase I was a seven-year, approximately $230 million program to flight-validate scramjet propulsion, hypersonic aerodynamics and design methods. Subsequent phases were not continued, as the X-43 series of aircraft was replaced by the X-51. The X-43A aircraft was a small unpiloted test vehicle measuring just over 3.7 m in length. The vehicle was a lifting-body design, in which the body of the aircraft itself provides a significant amount of lift for flight. The aircraft weighed roughly 3,000 pounds. The X-43A was designed to be fully controllable in high-speed flight, even when gliding without propulsion.
However, the aircraft was not designed to land and be recovered; test vehicles crashed into the Pacific Ocean when the test was over. Traveling at such Mach numbers produces a great deal of heat due to the shock waves involved in supersonic drag. At high Mach numbers, heat can become so intense that metal portions of the airframe would melt; the X-43A compensated for this by cycling water behind the engine cowl and sidewall leading edges, cooling those surfaces. In tests, the water circulation was activated at about Mach 3. The X-43A's developers designed the aircraft's airframe to be part of the propulsion system. The engine of the X-43A was primarily fueled with hydrogen; in the successful test, about two pounds of the fuel was used. Unlike rockets, scramjet-powered vehicles do not carry oxygen on board for fueling the engine; removing the need to carry oxygen significantly reduces the vehicle's size and weight. In the future, such lighter vehicles could take heavier payloads into space or carry payloads of the same weight much more efficiently.
Hyper-X
–
Pegasus booster accelerating NASA's X-43A shortly after ignition during test flight (March 27, 2004)
Hyper-X
–
Artist's concept of X-43A with scramjet attached to the underside
Hyper-X
–
NASA's B-52B launch aircraft takes off carrying the X-43A hypersonic research vehicle (March 27, 2004)
Hyper-X
–
Full-scale model of the X-43 plane in Langley's 8-foot (2.4 m), high-temperature wind tunnel.
16.
Euler equations (fluid dynamics)
–
In fluid dynamics, the Euler equations are a set of quasilinear hyperbolic equations governing adiabatic and inviscid flow; they are named after Leonhard Euler. In fact, the Euler equations can be obtained by linearization of some more precise continuity equations, like the Navier–Stokes equations, in a local equilibrium state given by a Maxwellian. The Euler equations can be applied to incompressible and to compressible flow, in the incompressible case assuming the flow velocity is a solenoidal field. Historically, only the incompressible equations were derived by Euler; however, fluid dynamics literature often refers to the full set of the more general compressible equations, including the energy equation, together as the Euler equations. From the mathematical point of view, the Euler equations are notably hyperbolic conservation equations in the case without an external field. In fact, like any Cauchy equation, the Euler equations, originally formulated in convective form, can also be put in conservation form; the convective form emphasizes changes to the state in a frame of reference moving with the fluid. The Euler equations first appeared in published form in Euler's article "Principes généraux du mouvement des fluides", and they were among the first partial differential equations to be written down. At the time Euler published his work, the system of equations consisted of the momentum and continuity equations; an additional equation, which was later to be called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. In the incompressible equations, g represents body accelerations acting on the continuum, for example gravity, inertial accelerations, and electric field acceleration. The first equation is the Euler momentum equation with uniform density; the second equation is the incompressibility constraint, stating that the flow velocity is a solenoidal field. Notably, the continuity equation would be required as an additional third equation in case of density varying in time or varying in space.
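The incompressible Euler equations described in the paragraph above, written in convective form (a standard statement, reconstructed here because the article's displayed equations did not survive extraction):

```latex
\frac{\partial \mathbf{u}}{\partial t} + \left(\mathbf{u}\cdot\nabla\right)\mathbf{u}
  = -\frac{\nabla p}{\rho} + \mathbf{g},
\qquad
\nabla \cdot \mathbf{u} = 0
```

The first is the momentum equation with uniform density ρ, p the pressure and g the body acceleration; the second is the incompressibility (solenoidal) constraint on the flow velocity u.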
The equations above thus represent, respectively, conservation of mass and momentum; flow velocity and pressure are the physical variables. Although Euler first presented these equations in 1755, many fundamental questions about them remain unanswered. In three space dimensions it is not even known whether solutions of the equations are defined for all time or whether they form singularities. In order to make the equations dimensionless, a characteristic length r0 and a characteristic velocity u0 need to be defined; these should be chosen such that the dimensionless variables are all of order one. The limit of high Froude numbers is thus notable and can be studied with perturbation theory. The conservation form emphasizes the mathematical properties of the Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are advantages in using the conserved variables; this gives rise to a class of numerical methods called conservative methods.
Euler equations (fluid dynamics)
–
The "Streamline curvature theorem" states that the pressure at the upper surface of an airfoil is lower than the pressure far away and that the pressure at the lower surface is higher than the pressure far away; hence the pressure difference between the upper and lower surfaces of an airfoil generates a lift force.
17.
Vorticity
–
Conceptually, vorticity could be determined by marking parts of the continuum in a small neighborhood of the point in question, and watching their relative displacements as they move along the flow. The vorticity vector would be twice the angular velocity vector of those particles relative to their center of mass. This quantity must not be confused with the angular velocity of the particles relative to some other point. More precisely, the vorticity is a pseudovector field ω→, defined as the curl of the flow velocity vector u→. The definition can be expressed by the vector analysis formula ω→ ≡ ∇ × u→, where ∇ is the del operator. The vorticity of a two-dimensional flow is always perpendicular to the plane of the flow. The vorticity is related to the flow's circulation along a closed path by Stokes' theorem, namely for any infinitesimal surface element with normal direction n→ and area dA. Many phenomena, such as the blowing out of a candle by a puff of air, are more readily explained in terms of vorticity than in terms of the basic concepts of pressure and velocity. This applies, in particular, to the formation and motion of vortex rings. In a mass of continuum that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, of water in a tank that has been spinning for a while around its vertical axis. The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear; in such a shear flow the vorticity will be zero on the axis and maximal near the walls. Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories; an example is the ideal irrotational vortex, where most particles rotate about some straight axis. Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears.
If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow. Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by ω→, defined as the curl or rotational of the velocity field v→ describing the continuum motion. In Cartesian coordinates, ω→ = ∇ × v→ = (∂vz/∂y − ∂vy/∂z, ∂vx/∂z − ∂vz/∂x, ∂vy/∂x − ∂vx/∂y). The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier–Stokes equations. In many flows the vorticity is concentrated in small regions surrounding the axes of discrete vortices; this is clearly true in the case of two-dimensional potential flow, and vorticity is a useful tool to understand how such ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field; this diffusion is accounted for by a term in the vorticity transport equation. Thus, in cases of very viscous flows, the vorticity will be diffused throughout the flow field. A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation dx/ωx = dy/ωy = dz/ωz. A vortex tube is the surface in the continuum formed by all vortex lines passing through a given closed curve in the continuum.
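As a quick numerical illustration of the rigid-body case described above (the grid and rotation rate are made up for the example), the curl of the velocity field u = (−Ωy, Ωx) evaluates to 2Ω everywhere:

```python
import numpy as np

# Rigid-body rotation at angular velocity Omega: u = (-Omega*y, Omega*x).
# Its vorticity omega_z = dv/dx - du/dy should equal 2*Omega at every point.
Omega = 1.5
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
u = -Omega * Y   # x-component of velocity
v = Omega * X    # y-component of velocity

# Finite-difference curl (exact here, since the field is linear in x and y).
dv_dx = np.gradient(v, x, axis=0)
du_dy = np.gradient(u, y, axis=1)
omega_z = dv_dx - du_dy
```

For the ideal irrotational vortex mentioned in the text, the same computation would give zero vorticity away from the axis despite the curved particle paths.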
Vorticity
–
Continuum mechanics
Vorticity
–
Example flows:
18.
Supersonic
–
Supersonic travel is a rate of travel of an object that exceeds the speed of sound. For objects traveling in dry air at a temperature of 20 °C at sea level, this speed is approximately 343 m/s. Speeds greater than five times the speed of sound are often referred to as hypersonic. Flights during which only some parts of the air surrounding an object, such as the ends of rotor blades, reach supersonic speeds are called transonic; this occurs typically somewhere between Mach 0.8 and Mach 1.23. Sounds are traveling vibrations in the form of pressure waves in an elastic medium. In gases, sound travels longitudinally at different speeds, mostly depending on the molecular mass and temperature of the gas. Since air temperature and composition vary significantly with altitude, Mach numbers for aircraft may change despite a constant travel speed. In water at room temperature, supersonic speed can be considered as any speed greater than 1,440 m/s. In solids, sound waves can be polarized longitudinally or transversely and have higher velocities. Supersonic fracture is crack motion faster than the speed of sound in a brittle material. At the beginning of the 20th century, the term "supersonic" was used as an adjective to describe sound whose frequency is above the range of normal human hearing; the modern term for this meaning is "ultrasonic". The tip of a bullwhip is thought to be the first man-made object to break the sound barrier, resulting in the telltale crack; the wave motion traveling through the bullwhip is what makes it capable of achieving supersonic speeds. Most modern fighter aircraft are supersonic aircraft, but there have been supersonic passenger aircraft, namely Concorde and the Tupolev Tu-144. Both these passenger aircraft and some modern fighters are also capable of supercruise. Since Concorde's final retirement flight on November 26, 2003, there are no supersonic passenger aircraft left in service.
Some large bombers, such as the Tupolev Tu-160 and Rockwell B-1 Lancer, are also supersonic-capable. Most modern firearm bullets are supersonic, with rifle projectiles often travelling at speeds approaching, and in some cases well exceeding, Mach 3. Most spacecraft, most notably the Space Shuttle, are supersonic at least during portions of their reentry. During ascent, launch vehicles generally avoid going supersonic below 30 km to reduce air drag. Note that the speed of sound decreases somewhat with altitude, due to the lower temperatures found there; at even higher altitudes the temperature starts increasing, with a corresponding increase in the speed of sound. When an inflated balloon is burst, the torn pieces of latex contract at supersonic speed. Supersonic aerodynamics is simpler than subsonic aerodynamics because the airsheets at different points along the plane often cannot affect each other. Supersonic jets and rocket vehicles require several times greater thrust to push through the extra aerodynamic drag experienced within the transonic region. Designers use the supersonic area rule and the Whitcomb area rule to minimize sudden changes in size. However, in practical applications, a supersonic aircraft must operate stably in both subsonic and supersonic profiles, hence aerodynamic design is more complex.
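The dependence of the speed of sound on gas temperature noted above can be sketched with the ideal-gas relation a = √(γRT). The constants below are standard values for dry air; this is an illustrative calculation, not taken from the article:

```python
import math

GAMMA = 1.4       # ratio of specific heats for dry air
R_AIR = 287.05    # specific gas constant for dry air, J/(kg*K)

def speed_of_sound(temp_kelvin):
    """Ideal-gas speed of sound: a = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

def mach_number(v, temp_kelvin):
    """Mach number of an object moving at v m/s through air at temperature T."""
    return v / speed_of_sound(temp_kelvin)

a_sea_level = speed_of_sound(293.15)       # 20 degrees C: roughly 343 m/s
m_high_alt = mach_number(250.0, 216.65)    # same airspeed, colder stratospheric air
```

Because the stratosphere is colder, the same true airspeed corresponds to a higher Mach number at altitude, which is the effect described in the text.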
Supersonic
–
A United States Navy F/A-18F Super Hornet in transonic flight
Supersonic
–
U.S. Navy F/A-18 approaching the sound barrier. The white cloud forms as a result of the supersonic expansion fans dropping the air temperature below the dew point.
19.
Hypersonic
–
In aerodynamics, a hypersonic speed is one that is highly supersonic; since the 1970s, the term has generally been assumed to refer to speeds of Mach 5 and above. The hypersonic regime is often alternatively defined as the range of speeds where ramjets do not produce net thrust. The peculiarities of hypersonic flow are as follows: a thin shock layer, aerodynamic heating, an entropy layer, real gas effects, low density effects, and the independence of aerodynamic coefficients with Mach number. As a body's Mach number increases, the density behind a bow shock generated by the body also increases; consequently, the distance between the bow shock and the body decreases at higher Mach numbers. As Mach numbers increase, the entropy change across the shock also increases. A portion of the large kinetic energy associated with flow at high Mach numbers transforms into internal energy in the fluid due to viscous effects; this increase in internal energy is realized as an increase in temperature. The resulting expansion of the lower portion of the boundary layer causes the boundary layer over the body to grow thicker. The terms subsonic and supersonic usually refer to speeds below and above the local speed of sound respectively, but particular ranges of Mach values are also used to delimit flow regimes. Generally, NASA defines "high" hypersonic as any Mach number from 10 to 25; among the aircraft operating in this regime are the Space Shuttle and various developing spaceplanes. In the following table, the regimes or ranges of Mach values are referenced instead of the usual meanings of subsonic and supersonic. The categorization of airflow relies on a number of similarity parameters; for transonic and compressible flow, the Mach and Reynolds numbers alone allow good categorization of many flow cases. Hypersonic flows, however, require other similarity parameters. First, the analytic equations for the oblique shock angle become nearly independent of Mach number at high Mach numbers. Second, the formation of strong shocks around aerodynamic bodies changes how useful the freestream Reynolds number is as an estimate of the behavior of the boundary layer over a body.
Finally, the high temperatures of hypersonic flow mean that real gas effects become important; for this reason, research in hypersonics is often referred to as aerothermodynamics. The introduction of real gas effects means that more variables are required to describe the full state of the gas. This means that for a hypersonic flow, something between 10 and 100 variables may be required to describe the state of the gas at any given time.
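The thinning shock layer described above can be illustrated with the normal-shock density ratio from standard gas dynamics (this relation is textbook material, not quoted from the article): ρ2/ρ1 = (γ+1)M² / ((γ−1)M² + 2), which saturates at (γ+1)/(γ−1) = 6 for air as the Mach number grows.

```python
GAMMA = 1.4  # ratio of specific heats for air

def density_ratio(mach):
    """Density jump rho2/rho1 across a normal shock at the given Mach number."""
    m2 = mach * mach
    return (GAMMA + 1.0) * m2 / ((GAMMA - 1.0) * m2 + 2.0)

# The ratio rises with Mach number but saturates near (gamma+1)/(gamma-1) = 6:
# the denser post-shock gas is why the shock stands closer to the body
# at hypersonic speeds.
ratios = {m: density_ratio(m) for m in (2, 5, 10, 25)}
```

At very high Mach numbers the real-gas effects discussed in the text (dissociation, ionization) effectively lower γ, allowing still higher density ratios than this perfect-gas formula predicts.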
Hypersonic
–
NASA X-43 at Mach 7
20.
Conformal transformation
–
In mathematics, a conformal map is a function that preserves angles locally. In the most common case, the function has a domain and an image in the complex plane. More formally, let U and V be open subsets of Cⁿ. A function f : U → V is called conformal at a point u0 ∈ U if it preserves oriented angles between curves through u0, with respect to their orientation. Conformal maps preserve both angles and the shapes of small figures, but not necessarily their size or curvature. The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation: if the Jacobian matrix of the transformation is everywhere a scalar times a rotation matrix, then the transformation is conformal. Conformal maps can be defined between domains in higher-dimensional Euclidean spaces, and more generally on a Riemannian or semi-Riemannian manifold. An important family of examples of conformal maps comes from complex analysis. If U is an open subset of the complex plane C, then a function f : U → C is conformal if and only if it is holomorphic and its derivative is everywhere non-zero on U. If f is antiholomorphic, it still preserves angles, but it reverses their orientation. In the literature, there is another definition of conformal map: a map f which is one-to-one and holomorphic on an open set in the plane. Since a one-to-one map defined on a non-empty open set cannot be constant, the open mapping theorem forces the inverse function to be holomorphic; thus, under this definition, a map is conformal if and only if it is biholomorphic. The two definitions for conformal maps are not equivalent. Being one-to-one and holomorphic implies having a non-zero derivative; however, the exponential function is a holomorphic function with a nonzero derivative that is not one-to-one, since it is periodic. A map of the extended complex plane onto itself is conformal if and only if it is a Möbius transformation. Again, for the conjugate, angles are preserved but orientation is reversed; an example of the latter is taking the reciprocal of the conjugate, which corresponds to circle inversion with respect to the unit circle.
This can also be expressed as taking the reciprocal of the radial coordinate in circular coordinates. In Riemannian geometry, two Riemannian metrics g and h on a smooth manifold M are called conformally equivalent if g = u h for some positive function u on M; the function u is called the conformal factor. A diffeomorphism between two Riemannian manifolds is called a conformal map if the pulled-back metric is conformally equivalent to the original one. For example, stereographic projection of a sphere onto the plane augmented with a point at infinity is a conformal map. One can also define a conformal structure on a smooth manifold, as a class of conformally equivalent Riemannian metrics. If a function is harmonic over a plane domain and is transformed via a conformal map to another plane domain, the transformation is also harmonic.
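The angle-preservation property of a holomorphic map with non-zero derivative can be checked numerically. The map f(z) = z², the base point, and the two directions below are illustrative choices, not taken from the article:

```python
import cmath

def f(z):
    """A holomorphic map; its derivative 2z is non-zero away from the origin."""
    return z * z

z0 = 1 + 1j
eps = 1e-6
d1, d2 = 1 + 0j, cmath.exp(1j * 0.7)   # two directions separated by 0.7 rad

# Push each direction forward through f with a small step; to first order the
# images are f'(z0)*d1 and f'(z0)*d2, i.e. both rotated and scaled identically.
w1 = (f(z0 + eps * d1) - f(z0)) / eps
w2 = (f(z0 + eps * d2) - f(z0)) / eps

angle_before = cmath.phase(d2 / d1)
angle_after = cmath.phase(w2 / w1)
```

Repeating the check at z0 = 0, where the derivative vanishes, would show the angle doubling instead: this is why the non-zero-derivative condition in the definition matters.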
Conformal transformation
–
A rectangular grid (top) and its image under a conformal map f (bottom). It is seen that f maps pairs of lines intersecting at 90° to pairs of curves still intersecting at 90°.
21.
Cylinder (geometry)
–
In its simplest form, a cylinder is the surface formed by the points at a fixed distance from a given straight line called the axis of the cylinder. It is one of the most basic curvilinear geometric shapes. Commonly the word cylinder is understood to refer to a finite section of a right circular cylinder having a finite height with circular ends perpendicular to the axis, as shown in the figure. If the ends are open, it is called an open cylinder; if the ends are closed by flat surfaces, it is called a solid cylinder. The formulae for the surface area and the volume of such a cylinder have been known since deep antiquity. The area of the side is known as the lateral area L; an open cylinder does not include either top or bottom elements. The surface area of a closed cylinder is made up of the sum of all three components, top, bottom and side: A = 2πr² + 2πrh = 2πr(r + h) = L + 2B, where L = 2πrh is the lateral area and B = πr² is the area of each base. For a given volume, the closed cylinder with the smallest surface area has h = 2r; equivalently, for a given surface area, the closed cylinder with the largest volume has h = 2r. Cylindric sections are the intersections of cylinders with planes. For a right circular cylinder, there are four possibilities. A plane tangent to the cylinder meets the cylinder in a single straight line segment. Moved while parallel to itself, the plane either does not intersect the cylinder or intersects it in two parallel line segments. All other planes intersect the cylinder in an ellipse or, when they are perpendicular to the axis of the cylinder, in a circle. A cylinder whose cross section is an ellipse, parabola, or hyperbola is called an elliptic cylinder, parabolic cylinder, or hyperbolic cylinder respectively. Elliptic cylinders are also known as cylindroids, but that name is ambiguous, as it can also refer to the Plücker conoid. The volume of an elliptic cylinder with semi-axes a and b and height h is V = ∫₀ʰ A dx = ∫₀ʰ πab dx = πab ∫₀ʰ dx = πabh. Even more general than the cylinder is the generalized cylinder.
The cylinder is a degenerate quadric because at least one of the coordinates does not appear in the equation. An oblique cylinder has the top and bottom surfaces displaced from one another. There are other more unusual types of cylinders, such as the cylindrical shell: let the height be h, the internal radius r, and the external radius R; the volume is then given by V = π(R² − r²)h.
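The formulas above, including the h = 2r optimum for a closed cylinder of fixed volume, can be verified numerically. This is a quick check with illustrative numbers, not part of the article:

```python
import math

def surface_area(r, h):
    """Closed-cylinder surface area: A = 2*pi*r**2 + 2*pi*r*h."""
    return 2 * math.pi * r * r + 2 * math.pi * r * h

def volume(r, h):
    """Cylinder volume: V = pi * r**2 * h."""
    return math.pi * r * r * h

V = 100.0  # fixed volume for the optimization check

def area_for_radius(r):
    h = V / (math.pi * r * r)   # choose h so the volume stays fixed at V
    return surface_area(r, h)

# Analytic optimum: V = pi*r**2*(2r) = 2*pi*r**3, so r = (V / (2*pi))**(1/3),
# at which point the corresponding height equals 2*r.
r_opt = (V / (2 * math.pi)) ** (1.0 / 3.0)
h_opt = V / (math.pi * r_opt * r_opt)
```

Perturbing the radius in either direction while holding the volume fixed increases the surface area, confirming that h = 2r is the minimum.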
Cylinder (geometry)
–
Tycho Brahe Planetarium building, Copenhagen, its roof being an example of a cylindric section
Cylinder (geometry)
–
A right circular cylinder with radius r and height h.
Cylinder (geometry)
–
In projective geometry, a cylinder is simply a cone whose apex is at infinity, which corresponds visually to a cylinder in perspective appearing to be a cone towards the sky.
22.
Los Alamos National Lab
–
Los Alamos National Laboratory (LANL) is one of two laboratories in the United States in which classified work towards the design of nuclear weapons has been undertaken. LANL is a United States Department of Energy national laboratory, managed and operated by Los Alamos National Security and located in Los Alamos, New Mexico. The laboratory is one of the largest science and technology institutions in the world. It conducts multidisciplinary research in fields such as national security, space exploration, renewable energy, medicine, and nanotechnology. General Leslie Groves wanted a central laboratory at an isolated location for safety; it should be at least 200 miles from international boundaries and west of the Mississippi. Major John Dudley suggested Oak City, Utah, or Jemez Springs, New Mexico, but both were rejected. Manhattan Project scientific director J. Robert Oppenheimer had spent much time in his youth in the New Mexico area and suggested the Los Alamos Ranch School. Dudley had rejected the school as not meeting Groves' criteria, but as soon as Groves saw it he said, in effect, "This is the place." Oppenheimer became the laboratory's first director. During the Manhattan Project, Los Alamos hosted thousands of employees, including many Nobel Prize-winning scientists. The location was a total secret; its only mailing address was a post office box, number 1663, in Santa Fe, New Mexico. Eventually two other post office boxes were used, 180 and 1539, also in Santa Fe. Though its contract with the University of California was initially intended to be temporary, the relationship was maintained long after the war. The work of the laboratory culminated in the creation of several atomic devices, one of which was used in the first nuclear test near Alamogordo, New Mexico, codenamed "Trinity". The other two were weapons, Little Boy and Fat Man, which were used in the attacks on Hiroshima and Nagasaki. The Laboratory received the Army-Navy 'E' Award for Excellence in production on October 16, 1945.
Many of the original Los Alamos luminaries chose to leave the laboratory after the war. In the years since the 1940s, Los Alamos was responsible for the development of the hydrogen bomb and many other variants of nuclear weapons. In 1952, Lawrence Livermore National Laboratory was founded to act as Los Alamos' competitor; Los Alamos and Livermore served as the primary classified laboratories in the U.S. national laboratory system, designing all of the country's nuclear arsenal. Additional work included basic research, particle accelerator development, and health physics. Many nuclear tests were undertaken in the Marshall Islands and at the Nevada Test Site. During the late 1950s, a number of scientists, including J. Robert "Bob" Beyster, left Los Alamos to work for General Atomics in San Diego. Three major nuclear-related accidents have occurred at LANL: criticality accidents occurred in August 1945 and May 1946, and a third accident occurred during an annual physical inventory in December 1958. Several buildings associated with the Manhattan Project at Los Alamos were declared a National Historic Landmark in 1965. Los Alamos' nuclear work is currently thought to relate primarily to computer simulations and stockpile stewardship. The development of the Dual-Axis Radiographic Hydrodynamic Test Facility will allow complex simulations of nuclear tests to take place without full explosive yields. The lab has also made intense efforts toward humanitarian causes through its scientific research in medicine.
Los Alamos National Lab
–
Aerial view
Los Alamos National Lab
–
Los Alamos National Laboratory
Los Alamos National Lab
–
The first stages of the explosion of the Trinity nuclear test.
Los Alamos National Lab
–
Sites
23.
Lockheed Corporation
–
The Lockheed Corporation was an American aerospace company. Lockheed was founded in 1912 and later merged with Martin Marietta to form Lockheed Martin in 1995. The Alco Hydro-Aeroplane Company was established in San Francisco in 1912 by the brothers Allan and Malcolm Loughead. Following the Model F-1, the company invested heavily in a new design; however, its asking price of $2,500 could not compete in a market saturated with post-World War I $350 Curtiss JN-4s and de Havilland trainers. The Loughead Aircraft Manufacturing Company closed its doors in 1921. In 1926, Allan Loughead, Jack Northrop, and Kenneth Jay secured funding to form the Lockheed Aircraft Company in Hollywood. This new company utilized some of the technology originally developed for the Model S-1 to design the Vega Model. In March 1928, the company relocated to Burbank, California. From 1926 to 1928 the company produced over 80 aircraft and employed more than 300 workers, who by April 1929 were building five aircraft per week. In July 1929, majority shareholder Fred Keeler sold 87% of the Lockheed Aircraft Company to Detroit Aircraft Corporation. In August 1929, Allan Lockheed resigned. The Great Depression ruined the aircraft market, and Detroit Aircraft went bankrupt. A syndicate headed by brothers Robert and Courtlandt Gross bought the company for a mere $40,000. Ironically, Allan Lockheed himself had planned to bid for his own company, but had raised only $50,000, which he felt was too small a sum for a serious bid. In 1934, Robert E. Gross was named chairman of the new company, the Lockheed Aircraft Corporation; his brother Courtlandt S. Gross was a co-founder and executive, succeeding Robert as chairman following his death in 1961. The company was renamed the Lockheed Corporation in 1977. In the 1930s, Lockheed spent $139,400 to develop the Model 10 Electra, a small twin-engined transport.
The company sold 40 in the first year of production. Amelia Earhart and her navigator, Fred Noonan, flew one in their failed attempt to circumnavigate the world in 1937. Subsequent designs, the Lockheed Model 12 Electra Junior and the Lockheed Model 14 Super Electra, expanded their market. The Lockheed Model 14 formed the basis for the Hudson bomber, whose primary role was submarine hunting. Model 14 Super Electras were sold abroad, and more than 100 were license-built in Japan for use by the Imperial Japanese Army. The P-38 was the only American fighter aircraft in production throughout American involvement in the war, from Pearl Harbor to Victory over Japan Day. It filled ground-attack, air-to-air, and even strategic bombing roles in all theaters of the war in which the United States operated. The Lockheed Vega factory was located next to Burbank's Union Airport, which the company had purchased in 1940. During the war, the area was camouflaged to fool enemy aerial reconnaissance.
Lockheed Corporation
–
P-38J Lightning Yippee
Lockheed Corporation
–
P-38 Lightning assembly line at the Lockheed plant, Burbank, California in World War II. In June 1943, this assembly line was reconfigured into a mechanized line, which more than doubled the rate of production. The transition to the new system was accomplished in only eight days. During this time production never stopped. It was continued outdoors.
Lockheed Corporation
–
A Lockheed L-049 Constellation sporting the livery of Trans World Airlines at the Pima Air & Space Museum.
Lockheed Corporation
–
The Lockheed U-2, which first flew in 1955, provided intelligence on Soviet bloc countries.
24.
Douglas Aircraft Company
–
The Douglas Aircraft Company was an American aerospace manufacturer based in Southern California. It was founded in 1921 by Donald Wills Douglas, Sr. Douglas Aircraft Company largely operated as a division of McDonnell Douglas after the merger; McDonnell Douglas later merged with Boeing in 1997. The Douglas Aircraft Company was founded by Donald Wills Douglas, Sr. on July 22, 1921 in Santa Monica, California, following the dissolution of the Davis-Douglas Company. An early claim to fame was the first circumnavigation of the world by air in Douglas airplanes in 1924. In 1923, the U.S. Army Air Service was interested in carrying out a mission to circumnavigate the Earth for the first time by aircraft, and Donald Douglas proposed a modified Douglas DT to meet the Army's needs. The two-place, open-cockpit DT biplane torpedo bomber had previously been produced for the U.S. Navy. The DTs were taken from the production lines at the company's manufacturing plants in Rock Island, Illinois, and Dayton, Ohio. The modified aircraft, known as the Douglas World Cruiser, was also the first major project for Jack Northrop, who designed the fuel system for the series. The prototype was delivered in November 1923, and tests were completed on 19 November. In preparation for the expedition, spare parts, including 15 extra Liberty L-12 engines and 14 extra sets of pontoons, were sent to airports along the route. The last of these aircraft was delivered to the U.S. Army on 11 March 1924. After the success of the World Cruiser, the Army Air Service ordered six similar aircraft as observation aircraft. The success of the DWC established the Douglas Aircraft Company among the major aircraft companies of the world. Douglas adopted a logo that showed aircraft circling a globe, replacing the original winged-heart logo; the logo later evolved into an aircraft, a rocket, and a globe.
The logo was later adopted by the McDonnell Douglas Corporation, and then became the basis of the current logo of the Boeing Company after their 1997 merger. Many Douglas aircraft had long service lives. Douglas Aircraft designed and built a variety of aircraft for the U.S. military, including the Navy, Army Air Forces, Marine Corps, and Air Force. The company initially built torpedo bombers for the U.S. Navy; within five years, it was building about 100 aircraft annually. Among the early employees at Douglas were Ed Heinemann, Dutch Kindelberger, and Jack Northrop. The company retained its military market and expanded into amphibian airplanes in the late 1920s, also moving its facilities to Clover Field at Santa Monica, California.
Douglas Aircraft Company
–
Machine tool operator at the Douglas Aircraft plant, Long Beach, California in World War II. After losing thousands of workers to military service, American manufacturers hired women for production positions, to the point where the typical aircraft plant's workforce was 40% female.
Douglas Aircraft Company
–
Women at work on bomber, Douglas Aircraft Company, Long Beach, California in October 1942
Douglas Aircraft Company
–
An ex-USAF C-47A Skytrain, the military version of the DC-3, on display in England in 2010. This aircraft flew from a base in Devon, England, during the Invasion of Normandy.
Douglas Aircraft Company
–
Douglas DC-3
25.
McDonnell Aircraft
–
The McDonnell Aircraft Corporation was an American aerospace manufacturer based in St. Louis, Missouri. McDonnell Aircraft later merged with the Douglas Aircraft Company to form McDonnell Douglas in 1967. James Smith McDonnell formed McDonnell & Associates in Milwaukee, Wisconsin, in 1928 to produce a personal aircraft for family use. The economic depression from 1929 ruined his plans; the company collapsed, and he went to work for Glenn L. Martin. He left in 1938 to try again with his own firm, McDonnell Aircraft Corporation, based near St. Louis, Missouri. World War II was a major boost to the new company. It grew from 15 employees in 1939 to 5,000 at the end of the war and became a significant aircraft parts producer; McDonnell also developed the LBD-1 Gargoyle guided missile. McDonnell Aircraft suffered after the war with an end of government orders and a surplus of aircraft, but the advent of the Korean War helped push McDonnell into a major military fighter supply role. In 1943, McDonnell began developing jets when it was invited to bid on a US Navy contest. Dave Lewis joined the company as Chief of Aerodynamics in 1946. He led the development of the legendary F-4 Phantom II in 1954; Lewis became Executive Vice President in 1958 and finally President and Chief Operating Officer in 1962. Lewis went on to manage the Douglas Aircraft Division in 1967 after the McDonnell Douglas merger, and in 1969 he returned to St. Louis as President of McDonnell Douglas. The company was now a major employer but was having problems: with no civilian side of the company, every peacetime downturn in procurement led to lean times at McDonnell. McDonnell Aircraft and Douglas Aircraft began to sound each other out about a merger. Inquiries began in 1963; Douglas offered bid invitations from December 1966 and accepted that of McDonnell. The two firms were merged on April 28, 1967 as the McDonnell Douglas Corporation.
In 1967, with the merger of McDonnell and Douglas Aircraft, Dave Lewis, then president of McDonnell, was named chairman of the Long Beach division, and he managed the turnaround of the division. McDonnell Douglas would later merge with Boeing in August 1997. Boeing's defense and space division is based in St. Louis, Missouri, and is responsible for defense and space products and services. McDonnell Douglas's legacy product programs include the F-15 Eagle, AV-8B Harrier II, and F/A-18 Hornet.
McDonnell Aircraft
–
An FH-1 Phantom in 1948.
McDonnell Aircraft
–
McDonnell F2H Banshee, F3H Demon, and F4H Phantom II.
26.
NASA
–
President Dwight D. Eisenhower established NASA in 1958 with a distinctly civilian orientation, encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA's predecessor, NACA; the new agency became operational on October 1, 1958. Since that time, most US space exploration efforts have been led by NASA, including the Apollo Moon landing missions and the Skylab space station. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle. The agency is also responsible for the Launch Services Program, which provides oversight of launch operations and countdown management for unmanned NASA launches. NASA shares data with various national and international organizations, such as from the Greenhouse Gases Observing Satellite. Since 2011, NASA has been criticized for low cost efficiency. From 1946, the National Advisory Committee for Aeronautics had been experimenting with rocket planes such as the supersonic Bell X-1. In the early 1950s, there was a challenge to launch an artificial satellite for the International Geophysical Year; an effort toward this was the American Project Vanguard. After the Soviet launch of the world's first artificial satellite on October 4, 1957, the attention of the United States turned toward its own fledgling space efforts. This led to an agreement that a new federal agency based on NACA was needed to conduct all non-military activity in space. The Advanced Research Projects Agency was created in February 1958 to develop space technology for military application. On July 29, 1958, Eisenhower signed the National Aeronautics and Space Act; a NASA seal was approved by President Eisenhower in 1959. Elements of the Army Ballistic Missile Agency and the United States Naval Research Laboratory were incorporated into NASA, and earlier research efforts within the US Air Force and many of ARPA's early space programs were also transferred to NASA.
In December 1958, NASA gained control of the Jet Propulsion Laboratory. NASA has conducted many manned and unmanned spaceflight programs throughout its history. Some missions include both manned and unmanned aspects, such as the Galileo probe, which was deployed by astronauts in Earth orbit before being sent unmanned to Jupiter. The experimental rocket-powered aircraft programs started by NACA were extended by NASA as support for manned spaceflight. This was followed by a one-man space capsule program, and in turn by a two-man capsule program. The goal of landing a man on the Moon was met in 1969 by the Apollo program; however, reduction of the perceived threat and changing political priorities almost immediately caused the termination of most of the follow-on plans. NASA turned its attention to an Apollo-derived temporary space laboratory. To date, NASA has launched a total of 166 manned space missions on rockets, and thirteen X-15 rocket flights above the USAF definition of spaceflight altitude, 260,000 feet. The X-15 was an NACA experimental rocket-powered hypersonic research aircraft, developed in conjunction with the US Air Force; the design featured a slender fuselage with fairings along the side containing fuel and early computerized control systems.
NASA
–
1963 photo showing Dr. William H. Pickering (center), JPL Director, and President John F. Kennedy (right), with NASA Administrator James Webb in the background. They are discussing the Mariner program, with a model presented.
NASA
–
Seal of NASA
NASA
–
At launch control for the May 28, 1964, Saturn I SA-6 launch. Wernher von Braun is at center.
NASA
–
Mercury-Atlas 6 launch on February 20, 1962
27.
Ship
–
Historically, a ship was a sailing vessel with at least three square-rigged masts and a full bowsprit. Ships are generally distinguished from boats based on size, shape, and capacity. Ships have been important contributors to human migration and commerce. They have supported the spread of colonization and trade, but have also served scientific and cultural purposes. After the 16th century, new crops that had come from the Americas were spread across the world by ship. Ship transport is responsible for the largest portion of world commerce. As of 2016, there were more than 49,000 merchant ships; of these, 28% were oil tankers, 43% were bulk carriers, and 13% were container ships. Military forces operate vessels for naval warfare and to transport and support forces ashore; the top 50 navies had a median fleet of 88 surface vessels each, according to various sources. There is no universal definition of what distinguishes a ship from a boat. Ships can usually be distinguished from boats based on size and the ability to operate independently for extended periods. A legal definition of ship from Indian case law is a vessel that carries goods by sea, and a common notion is that a ship can carry a boat, but not vice versa. American and British 19th-century maritime law distinguished vessels from other craft, with ships and boats falling in one legal category. A number of large vessels are nevertheless usually referred to as boats; other types of vessel traditionally called boats are Great Lakes freighters and riverboats, though these are large enough to carry their own boats and heavy cargoes. In most maritime traditions ships have individual names, and modern ships may belong to a ship class, often named after its first ship. The first known vessels date back about 10,000 years. The first navigators began to use animal skins or woven fabrics as sails, affixed to the top of a pole set upright in a boat. This allowed men to explore widely, allowing for the settlement of Oceania, for example.
By around 3000 BC, Ancient Egyptians knew how to assemble wooden planks into a hull; they used woven straps to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams. Sneferu's ancient cedar-wood ship Praise of the Two Lands is the first recorded reference to a ship being referred to by name. The ancient Egyptians were perfectly at ease building sailboats; a remarkable example of their skills was the Khufu ship. Aksum was known by the Greeks for having seaports for ships from Greece, and a panel found at Mohenjo-daro depicted a sailing craft.
Ship
–
Italian full-rigged ship Amerigo Vespucci in New York Harbor, 1976
Ship
–
A raft is among the simplest boat designs.
Ship
–
Roman trireme mosaic from Carthage, Bardo Museum, Tunis.
Ship
–
A Japanese atakebune from the 16th century
28.
Aircraft
–
An aircraft is a machine that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil. The human activity that surrounds aircraft is called aviation. Crewed aircraft are flown by an onboard pilot, but unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion, and usage. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras: pioneers of flight; First World War, 1914 to 1918; aviation between the World Wars, 1918 to 1939; Second World War, 1939 to 1945; and the postwar era, also called the jet age, 1945 to the present day. Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large gasbags or canopies, filled with a relatively low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of this lifting gas is added to the weight of the aircraft structure, the total must be less than the weight of the air the craft displaces. A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs, usually fixed-wing, though none had yet been built. In 1919 Frederick Handley Page was reported as referring to "ships of the air", and in the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by a rigid outer framework and separate aerodynamic skin surrounding the gas bags, were produced.
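The buoyancy condition above (the total weight of gas plus structure must be less than the weight of air displaced) amounts to simple arithmetic. The sketch below is illustrative only and not from the source; the densities are standard approximate sea-level values.

```python
# Net lifting capacity of an aerostat: Archimedes' principle says the
# envelope can support whatever weight keeps the total at or below the
# weight of the displaced air. Densities are approximate sea-level values.

RHO_AIR = 1.225      # kg/m^3, ambient air (~15 degrees C)
RHO_HELIUM = 0.179   # kg/m^3

def payload_capacity_kg(volume_m3, rho_gas, structure_kg):
    """Mass the craft can lift beyond its own structure (negative = cannot float)."""
    displaced = RHO_AIR * volume_m3   # mass of air displaced by the envelope
    gas = rho_gas * volume_m3         # mass of the lifting gas inside it
    return displaced - gas - structure_kg

# A hypothetical 1000 m^3 helium envelope carrying a 500 kg structure:
print(payload_capacity_kg(1000, RHO_HELIUM, 500))  # ~546 kg of payload margin
```

Hot air lifts far less per cubic metre than helium (its density is only modestly below ambient), which is why hot-air balloons need such large envelopes.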
There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships. Then several accidents, such as the Hindenburg disaster in 1937, led to the demise of these rigid airships. Nowadays a balloon is an unpowered aerostat and an airship is a powered one. A powered, steerable aerostat is called a dirigible; sometimes this term is applied only to non-rigid balloons, and sometimes dirigible balloon is regarded as the definition of an airship. Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back, and these soon became known as blimps. During the Second World War, this shape was adopted for tethered balloons flown in windy weather.
Aircraft
–
NASA test aircraft
Aircraft
–
The Mil Mi-8 is the most-produced helicopter in history
Aircraft
–
"Voodoo", a modified P-51 Mustang, was the 2014 Reno Air Race champion.
Aircraft
–
A hot air balloon in flight
29.
Wind turbines
–
A wind turbine is a device that converts the wind's kinetic energy into electrical power. Wind turbines are manufactured in a wide range of vertical- and horizontal-axis types. The smallest turbines are used for applications such as battery charging for auxiliary power for boats or caravans, or to power traffic warning signs. Slightly larger turbines can be used for making contributions to a domestic power supply while selling unused power back to the utility supplier via the electrical grid. Wind turbines were used in Persia about 500–900 A.D. The windwheel of Hero of Alexandria marks one of the first known instances of wind powering a machine in history; however, the first known practical wind turbines were built in Sistan. These panemone were vertical-axle wind turbines, which had long vertical drive shafts with rectangular blades. Made of six to twelve sails covered in reed matting or cloth material, these turbines were used to grind grain or draw up water. Wind turbines first appeared in Europe during the Middle Ages. The first historical records of their use in England date to the 11th or 12th centuries, and there are reports of German crusaders taking their windmill-making skills to Syria around 1190. By the 14th century, Dutch wind turbines were in use to drain areas of the Rhine delta. Advanced windmills were described by Croatian inventor Fausto Veranzio; in his book Machinae Novae he described vertical-axis wind turbines with curved or V-shaped blades. The first electricity-generating wind turbine was a battery-charging machine installed in July 1887 by Scottish academic James Blyth to light his home in Marykirk. Some months later, American inventor Charles F. Brush built the first automatically operated wind turbine for electricity production. Although Blyth's turbine was considered uneconomical in the United Kingdom, electricity generation by wind turbines was more cost-effective in countries with widely scattered populations.
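The opening statement, that a turbine converts the wind's kinetic energy into electrical power, corresponds to the standard relation P = ½ρAv³·Cp, where no turbine can exceed the Betz fraction 16/27 of the available power. A minimal sketch, not from the source; the 0.40 power coefficient is an assumed typical value:

```python
import math

RHO_AIR = 1.225        # kg/m^3, sea-level air density
BETZ_LIMIT = 16 / 27   # theoretical maximum fraction of wind power extractable

def wind_power_watts(rotor_diameter_m, wind_speed_ms, cp=0.40):
    """Estimate turbine output: P = 0.5 * rho * A * v^3 * Cp.
    cp=0.40 is an assumed overall power coefficient, below the Betz limit."""
    area = math.pi * (rotor_diameter_m / 2) ** 2  # swept rotor area
    return 0.5 * RHO_AIR * area * wind_speed_ms ** 3 * cp

# A 23 m rotor (the diameter quoted for the early Danish machines) at 10 m/s
# yields on the order of 100 kW. Note the cubic dependence on wind speed:
print(wind_power_watts(23, 10))
```

The v³ term is why siting matters so much: doubling the wind speed multiplies the available power by eight.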
In Denmark by 1900, there were about 2,500 windmills for mechanical loads such as pumps and mills; the largest machines were on 24-meter towers with four-bladed, 23-meter-diameter rotors. By 1908 there were 72 wind-driven electric generators operating in the United States, ranging from 5 kW to 25 kW. Around the time of World War I, American windmill makers were producing 100,000 farm windmills each year, mostly for water-pumping. By the 1930s, wind generators for electricity were common on farms; in this period, high-tensile steel was cheap, and the generators were placed atop prefabricated open steel lattice towers. A forerunner of modern wind generators was in service at Yalta. This was a 100 kW generator on a 30-meter tower, connected to the local 6.3 kV distribution system, and it was reported to have an annual capacity factor of 32 percent, not much different from current wind machines. In the autumn of 1941, the first megawatt-class wind turbine was synchronized to a utility grid in Vermont; the Smith-Putnam wind turbine ran for only 1,100 hours before suffering a critical failure.
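The 32 percent annual capacity factor quoted for the Yalta machine can be turned into delivered energy with a one-line calculation. This is an illustrative sketch only:

```python
HOURS_PER_YEAR = 8760

def annual_energy_kwh(rated_kw, capacity_factor):
    """Energy delivered in a year at the given capacity factor."""
    return rated_kw * HOURS_PER_YEAR * capacity_factor

def capacity_factor(energy_kwh, rated_kw):
    """Actual energy divided by what continuous rated-power output would give."""
    return energy_kwh / (rated_kw * HOURS_PER_YEAR)

# The 100 kW Yalta generator at its reported 32% capacity factor delivers
# roughly 280,000 kWh per year:
print(annual_energy_kwh(100, 0.32))
```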
Wind turbines
–
Offshore wind farm using 5 MW REpower 5M turbines in the North Sea off the coast of Belgium.
Wind turbines
–
James Blyth's electricity-generating wind turbine, photographed in 1891
Wind turbines
–
The first automatically operated wind turbine, built in Cleveland in 1887 by Charles F. Brush. It was 60 feet (18 m) tall, weighed 4 tons (3.6 metric tonnes) and powered a 12 kW generator.
Wind turbines
–
Nordex N117/2400 in Germany, a modern low-wind turbine.
30.
Yacht
–
A yacht /ˈjɒt/ is a recreational boat or ship. In modern use of the term, yachts differ from working ships mainly by their leisure purpose. There are two different classes of yachts: sailing yachts and power boats. With the rise of the steamboat and other types of powerboat, sailing vessels in general came to be perceived as luxury items; later the term came to encompass large motor boats for primarily private pleasure purposes as well. Yacht lengths normally range from 10 metres up to dozens of metres; a luxury craft smaller than 12 metres is more commonly called a cabin cruiser or simply a cruiser. A superyacht generally refers to any yacht above 24 metres, and a megayacht to any yacht over 50 metres; this size is small in relation to typical cruise liners and oil tankers. A few countries have a special flag worn by recreational boats or ships; although inspired by the national flag, the yacht ensign does not always correspond with the civil or merchant ensign of the state in question. Yacht ensigns differ from merchant ensigns in order to signal that the yacht is not carrying cargo that requires a customs declaration; carrying commercial cargo on a boat with a yacht ensign is deemed to be smuggling in many jurisdictions. Until the 1950s, almost all yachts were made of wood or steel. Although wood hulls are still in production, the most common construction material is fibreglass, followed by aluminium, steel, carbon fibre, and ferrocement. The use of wood has changed and is no longer limited to traditional board-based methods; wood is mostly used by hobbyists or wooden-boat purists when building an individual boat. Apart from materials like carbon fibre and aramid fibre, spruce veneers laminated with epoxy resins have the best weight-to-strength ratios of all boatbuilding materials. Sailing yachts can range in length from about 6 metres to well over 30 metres.
Most privately owned yachts fall in the range of about 7 to 14 metres. In the United States, sailors tend to refer to smaller yachts as sailboats, while referring to the general sport of sailing as yachting. Within the limited context of racing, a yacht is any sailing vessel taking part in a race. Many modern racing yachts have efficient sail-plans, most notably the Bermuda rig, that allow them to sail towards the wind; this capability is the result of a sail-plan and hull design oriented towards it. Day sailing yachts are usually small, at under 6 metres in length. Sometimes called sailing dinghies, they often have a retractable keel or centreboard. Most day sailing yachts do not have a cabin, as they are designed for hourly or daily use and not for overnight journeys. They may have a cuddy cabin, where the front part of the hull has a raised solid roof to provide a place to store equipment or to offer shelter from wind or spray.
Yacht
–
Sailing Yacht "Zapata II"
Yacht
–
The "Lazzara" 80' "Alchemist" runs at full speed up the California Coast
Yacht
–
A yacht in Lorient, Brittany, France
Yacht
–
Aerial view of a yacht club and marina - Yacht Harbour Residence "Hohe Düne" in Rostock, Germany.
31.
Boundary layer
–
In the Earth's atmosphere, the atmospheric boundary layer is the air layer near the ground affected by diurnal heat, moisture, or momentum transfer to or from the surface. On an aircraft wing, the boundary layer is the part of the flow close to the wing. Laminar boundary layers can be classified according to their structure and the circumstances under which they are created. When a fluid rotates and viscous forces are balanced by the Coriolis effect, an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction. The layer of air over the surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or eddies, and the laminar flow creates less skin friction drag than the turbulent flow. Boundary layer flow over a wing surface begins as a smooth laminar flow. As the flow continues back from the leading edge, the laminar boundary layer increases in thickness, and at some distance back from the leading edge it transitions to turbulent flow. The low-energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The aerodynamic boundary layer was first defined by Ludwig Prandtl in a paper presented on August 12, 1904 at the third International Congress of Mathematicians in Heidelberg. Dividing the flow field into a thin boundary layer and an outer inviscid region allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations. The majority of the heat transfer to and from a body also takes place within the boundary layer. The pressure in the direction normal to the surface remains constant throughout the boundary layer.
The thickness of the velocity boundary layer is normally defined as the distance from the solid body at which the viscous flow velocity is 99% of the freestream velocity. Displacement thickness is an alternative definition, stating that the boundary layer represents a deficit in mass flow compared to an inviscid flow with slip at the wall: it is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of an object to be zero. The flow velocity then increases rapidly within the boundary layer, governed by the boundary layer equations.
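The 99%-velocity and displacement thicknesses can be made concrete using the classical Blasius solution for a laminar flat plate, a standard textbook result that is not derived in the text above. The sketch below uses an assumed kinematic viscosity for air:

```python
import math

def blasius_thicknesses(x_m, u_inf, nu=1.48e-5):
    """Laminar flat-plate boundary layer at distance x from the leading edge.

    nu is the kinematic viscosity in m^2/s (default: air near 15 C, an
    assumed value). Returns (delta_99, delta_star): the 99%-velocity
    thickness and the displacement thickness, both in metres.
    """
    re_x = u_inf * x_m / nu                       # local Reynolds number
    delta_99 = 5.0 * x_m / math.sqrt(re_x)        # where u reaches 99% of freestream
    delta_star = 1.721 * x_m / math.sqrt(re_x)    # mass-flow deficit thickness
    return delta_99, delta_star

d99, dstar = blasius_thicknesses(x_m=1.0, u_inf=10.0)
print(d99, dstar)  # a few millimetres, and roughly a millimetre-scale deficit
```

One metre back on a plate in a 10 m/s airstream, the layer is only a few millimetres thick, which is why Prandtl's thin-layer approximation simplifies the Navier–Stokes equations so effectively.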
Boundary layer
–
Ludwig Prandtl
32.
Grumman Aircraft
–
The Grumman Aircraft Engineering Corporation, later Grumman Aerospace Corporation, was a leading 20th-century U.S. producer of military and civilian aircraft. It was founded on December 6, 1929, by Leroy Grumman and partners, with a factory in Baldwin on Long Island, New York. All of the early Grumman employees were former Loening employees, and the company was named for Grumman because he was its largest investor. The company filed as a business on December 5, 1929. Keeping busy by welding aluminum tubing for truck frames, the company eagerly pursued contracts with the US Navy. Grumman designed the first practical floats with a retractable landing gear for the Navy. The first Grumman aircraft, the Grumman FF-1, was also for the Navy, and it was followed by a number of other successful designs. Grumman ranked 22nd among United States corporations in the value of wartime production contracts. Grumman's first jet aircraft was the F9F Panther; it was followed by the upgraded F9F/F-9 Cougar and the less well known F-11 Tiger in the 1950s. The company's big postwar successes came in the 1960s with the A-6 Intruder and E-2 Hawkeye and in the 1970s with the Grumman EA-6B Prowler. Grumman products were prominent in the film Top Gun and numerous World War II naval and Marine Corps aviation films. The U.S. Navy still employs the Hawkeye as part of Carrier Air Wings on board aircraft carriers. Grumman was the chief contractor on the Apollo Lunar Module that landed men on the moon; the firm received the contract on November 7, 1962. As the Apollo program neared its end, Grumman was one of the main competitors for the contract to design and build the Space Shuttle, but lost to Rockwell International. The company ended up involved in the program nonetheless, as a subcontractor to Rockwell, providing the wings.
In 1969 the company changed its name to Grumman Aerospace Corporation. The company built the Grumman Long Life Vehicle, a light transport mail truck designed for and used by the United States Postal Service; the LLV entered service in 1986. Gulfstream business jets continue to be manufactured by Gulfstream Aerospace, a wholly owned subsidiary of General Dynamics. For much of the Cold War period Grumman was the largest corporate employer on Long Island. Grumman's products were considered so reliable and ruggedly built that the company was often referred to as the Grumman Iron Works. At its peak in 1986 it employed 23,000 people on Long Island. A portion of the airport property has been used for the Grumman Memorial Park. Northrop Grumman's remaining business at the Bethpage campus is the Battle Management and Engagement Systems Division. Under the Grumman Olson brand it made the P-600 and P-6800 step vans for UPS. Grumman manufactured fire engines under the name Firecat and aerial tower trucks under the Aerialcat name; the company entered the fire apparatus business in 1976 with its purchase of Howe Fire Apparatus and ended operations in 1992. Grumman canoes were developed in 1944 as World War II was winding down, when company executive William Hoffman used the company's aircraft aluminum to replace the traditional wood design.
Grumman Aircraft
–
Grumman Historical Marker
Grumman Corporation
Apollo Spacecraft: Apollo Lunar Module Diagram
F-14 Tomcat at Grumman Memorial Park, Calverton, New York
33.
New York University
–
New York University (NYU) is a private nonprofit research university based in New York City. Founded in 1831, NYU is considered one of the world's most influential research universities; university rankings compiled by Times Higher Education, U.S. News & World Report, and the Academic Ranking of World Universities all rank NYU among the top 32 universities in the world. NYU's core campus is located in Greenwich Village in Manhattan. Among its faculty and alumni are 37 Nobel Laureates, over 30 Pulitzer Prize winners, and over 30 Academy Award winners; alumni include heads of state, royalty, eminent mathematicians, inventors, media figures, Olympic medalists, CEOs of Fortune 500 companies, and astronauts. NYU alumni are among the wealthiest in the world, and according to The Princeton Review, NYU is consistently considered by students and parents a "Top Dream College". Albert Gallatin, Secretary of the Treasury under Thomas Jefferson and James Madison, declared his intention to establish in this immense and fast-growing city "a system of rational and practical education fitting for all and graciously opened to all". A three-day literary and scientific convention held in City Hall in 1830 drew New Yorkers who believed the city needed a university designed for young men who would be admitted based upon merit rather than birthright or social class. On April 18, 1831, the institution was established with the support of a group of prominent New York City residents drawn from the city's merchants and bankers. Albert Gallatin was elected as the institution's first president. The university has been popularly known as New York University since its inception and was officially renamed New York University in 1896. In 1832, NYU held its first classes in rented rooms of four-story Clinton Hall; in 1835, the School of Law, NYU's first professional school, was established. 
The American Chemical Society was founded in 1876 at NYU. NYU became one of the nation's largest universities, with an enrollment of 9,300 in 1917. NYU had occupied its Washington Square campus since its founding, but overcrowding on the old campus, together with a desire to follow New York City's development further uptown, led the university to purchase a campus at University Heights in the Bronx. NYU's move to the Bronx occurred in 1894, spearheaded by the efforts of Chancellor Henry Mitchell MacCracken. The University Heights campus was far more spacious than its predecessor; as a result, most of the university's operations, along with the undergraduate College of Arts and Science and School of Engineering, were housed there. NYU's administrative operations were moved to the new campus, but some schools of the university remained at Washington Square. In 1914, Washington Square College was founded as the undergraduate college of NYU. In 1935, NYU opened the Nassau College-Hofstra Memorial of New York University at Hempstead; this extension would later become the fully independent Hofstra University. In 1950, NYU was elected to the Association of American Universities. In the late 1960s and early 1970s, financial crisis gripped the New York City government and the troubles spread to the city's institutions, including NYU.
New York University
–
Albert Gallatin
New York University
The University Heights campus, now home to Bronx Community College
The Silver Center c. 1900
34.
Antony Jameson
–
Antony Jameson FREng is Professor of Engineering in the Department of Aeronautics & Astronautics at Stanford University. Jameson is known for his work in the field of computational fluid dynamics; he has published more than 300 scientific papers in a range of areas including computational fluid dynamics and aerodynamics. Born in Gillingham, Kent, UK, Jameson spent much of his childhood in India, where his father was stationed as a British Army officer. He first attended school at St. Edward's School, Shimla; subsequently he was educated in England at Mowden Hall School and Winchester College. Jameson served as a lieutenant in the British Army in 1953-1955. On coming out of the army he worked in the compressor design section of Bristol Aero-Engines in the summer of 1955, before studying engineering at Trinity Hall, Cambridge University. Jameson graduated with first class honors in 1958; subsequently he stayed on at Cambridge to obtain a Ph.D. in magnetohydrodynamics, and he was a Research Fellow of Trinity Hall from 1960-1963. On leaving Cambridge he worked as an economist for the Trades Union Congress in 1964-1965, and he then became Chief Mathematician at Hawker Siddeley in Coventry. In 1966, Jameson joined the Aerodynamics Section of Grumman Aircraft Engineering Corporation in Bethpage; in this period, his work was largely directed toward the application of automatic control theory to stability augmentation systems. Starting in 1970, he began to concentrate on the problem of predicting transonic flow; existing numerical methods were not equal to the task, and it was clear that new methods would have to be developed. In 1972 Jameson moved to the Courant Institute of Mathematical Sciences at New York University, and in 1974 he was appointed Professor of Computer Science at New York University. 
He joined Princeton University in 1980, and in 1982 he was appointed James Smith McDonnell Distinguished University Professor of Aerospace Engineering; he was Director of the University's Program in Applied and Computational Mathematics from 1986 to 1988. He is currently Professor of Engineering in the Department of Aeronautics and Astronautics. During his career, Professor Jameson has devised a variety of new schemes for solving the Euler and Navier-Stokes equations for inviscid and viscous compressible flows. For example, he devised a multigrid scheme for the solution of steady flow problems. Jameson also wrote the FLO and SYN series of computer programs, which have been widely used in the aircraft industry. In 1980 he received the NASA Medal for Exceptional Scientific Achievement in recognition of his work on transonic potential flow. In 1991 he was elected a Fellow of the American Institute of Aeronautics and Astronautics, and in 1995 he was elected a Fellow of the Royal Society of London for Improving Natural Knowledge. In 1995, he was selected by ASME to receive The Spirit of St. Louis Medal, and in 1996 he was selected to receive the Theodorsen Lectureship Award from ICASE/NASA Langley. In 1997 he was elected as a Foreign Associate of the National Academy of Engineering. In 2001 he received the degree Docteur Honoris Causa from the University of Paris, and in 2002 he received the degree Docteur Honoris Causa from Uppsala University.
Antony Jameson
–
Antony Jameson in 2008
35.
Georgia Institute of Technology
–
The Georgia Institute of Technology is a public research university in Atlanta, Georgia, in the United States. It is a part of the University System of Georgia and has campuses in Savannah, Georgia; Metz, France; Athlone, Ireland; and Shenzhen, China. The institution was founded in 1885 as the Georgia School of Technology as part of Reconstruction plans to build an industrial economy in the post-Civil War Southern United States. Initially, it offered only a degree in mechanical engineering; by 1901, its curriculum had expanded to include electrical, civil, and chemical engineering. In 1948, the school changed its name to reflect its evolution from a trade school to a larger and more capable technical institute. Today, Georgia Tech is organized into six colleges and contains about 31 departments/units, with an emphasis on science and technology. It is well recognized for its degree programs in engineering, computing, business administration, the sciences, design, and liberal arts. Student athletics, both organized and intramural, are a part of student and alumni life; Georgia Tech fields eight men's and seven women's teams that compete in NCAA Division I athletics and the Football Bowl Subdivision. Georgia Tech is a member of the Coastal Division in the Atlantic Coast Conference. The idea of a technology school in Georgia was introduced in 1865 during the Reconstruction period; however, because the American South of that era was mainly populated by workers and few technical developments were occurring, the idea did not take hold immediately. In 1882, the Georgia State Legislature authorized a committee, led by Harris, to study technical education; the committee was impressed by the polytechnic educational models developed at the Massachusetts Institute of Technology and the Worcester County Free Institute of Industrial Science. On October 13, 1885, Georgia Governor Henry D. McDaniel signed the bill to create the school. In 1887, Atlanta pioneer Richard Peters donated to the state 4 acres of the site of a failed garden suburb called Peters Park. 
The site was bounded on the south by North Avenue; Peters then sold five adjoining acres of land to the state for US$10,000. This land was near Atlanta's northern city limits at the time of the school's founding; the surrender of the city had taken place on the southwestern boundary of the modern Georgia Tech campus in 1864. The Georgia School of Technology opened in the fall of 1888 with two buildings: one building had classrooms to teach students, while the second featured a shop and had a foundry, forge, boiler room, and engine room. It was designed for students to work and produce goods to sell. On October 20, 1905, U.S. President Theodore Roosevelt visited Georgia Tech. On the steps of Tech Tower, Roosevelt delivered a speech about the importance of technological education, and he then shook hands with every student. Georgia Tech's Evening School of Commerce began holding classes in 1912. The evening school admitted its first female student in 1917, although the state legislature did not officially authorize attendance by women until 1920. Annie T. Wise became the first female graduate in 1919 and was Georgia Tech's first female faculty member the following year.
Georgia Institute of Technology
–
Atlanta during the Civil War (c. 1864)
Georgia Institute of Technology
An early picture of Georgia Tech
Former Georgia Tech President G. Wayne Clough speaks at a student meeting.
36.
Overflow (software)
–
OVERFLOW - the OVERset grid FLOW solver - is a software package for simulating fluid flow around solid bodies using computational fluid dynamics. It is a compressible 3-D flow solver that solves the time-dependent, Reynolds-averaged Navier-Stokes equations. OVERFLOW was developed as part of a collaborative effort between NASA's Johnson Space Center in Houston, Texas and NASA Ames Research Center in Moffett Field, California; the driving force behind this work was the need to evaluate the flow about the Space Shuttle launch vehicle. Scientists use OVERFLOW to better understand the aerodynamic forces on a vehicle by evaluating the flowfield surrounding the vehicle. OVERFLOW has also been used to simulate the effect of debris on the Space Shuttle launch vehicle. See also: Computational fluid dynamics. External links: Official NASA OVERFLOW CFD Code web site; Article on OVERFLOW from NASA Insights.
Overflow (software)
–
This image depicts the flowfield around the Space Shuttle Launch Vehicle traveling at Mach 2.46 and at an altitude of 66,000 feet (20,000 m). The surface of the vehicle is colored by the pressure coefficient, and the gray contours represent the density of the surrounding air, as calculated using the OVERFLOW codes.
37.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to geometry; these include the concepts of points, lines, planes, surfaces, and angles. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings. 
In practice, this often means dealing with large-scale properties of spaces. Convex geometry investigates convex shapes in the Euclidean space and its more abstract analogues, often using techniques of real analysis; it has close connections to convex analysis, optimization, and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques; it has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of relative position of simple objects, such as points; it shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture, and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt. The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus and the Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, while later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
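The truncated-pyramid rule recorded in the Moscow Papyrus, V = (h/3)(a² + ab + b²) for a square frustum with base side a, top side b, and height h, can be checked numerically. The sketch below uses an illustrative function name of my own choosing, not one from the source:

```python
def frustum_volume(a: float, b: float, h: float) -> float:
    """Volume of a square frustum (truncated pyramid) with base side a,
    top side b, and height h, per V = (h / 3) * (a**2 + a*b + b**2)."""
    return (h / 3.0) * (a * a + a * b + b * b)

# The papyrus's own worked example uses a = 4, b = 2, h = 6, giving 56.
print(frustum_volume(4, 2, 6))  # 56.0
# Sanity check: a = b reduces the frustum to a prism, V = a^2 * h.
print(frustum_volume(3, 3, 3))  # 27.0
```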
Geometry
–
Visual checking of the Pythagorean theorem for the (3, 4, 5) triangle as in the Chou Pei Suan Ching 500–200 BC.
An illustration of Desargues' theorem, an important result in Euclidean and projective geometry
Geometry lessons in the 20th century
A European and an Arab practicing geometry in the 15th century.
38.
Computer-aided design
–
Computer-aided design (CAD) is the use of computer systems to aid in the creation, modification, analysis, or optimization of a design. CAD software is used to increase the productivity of the designer, improve the quality of design, and improve communications through documentation; CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. The term CADD (computer-aided design and drafting) is also used. Its use in designing electronic systems is known as electronic design automation, or EDA; in mechanical design it is known as mechanical design automation or computer-aided drafting. However, CAD involves more than just shapes: it may be used to design curves and figures in two-dimensional space, or curves, surfaces, and solids in three-dimensional space. CAD is also used to produce computer animation for special effects in movies, advertising, and technical manuals. The modern ubiquity and power of computers means that even everyday products such as perfume bottles are designed with CAD techniques. Because of its enormous economic importance, CAD has been a major driving force for research in computational geometry, computer graphics, and discrete differential geometry. The design of models for object shapes, in particular, is occasionally called computer-aided geometric design. Eventually CAD provided the designer with the ability to perform engineering calculations; during the transition, calculations were still performed either by hand or by those individuals who could run computer programs. CAD brought a major change to the engineering industry, where the roles of draftsmen, designers, and engineers began to merge. It did not so much eliminate departments as merge departments and empower draftsmen; CAD is just another example of the pervasive effect computers were beginning to have on industry. Current computer-aided design software packages range from 2D vector-based drafting systems to 3D solid and surface modelers; modern CAD packages can also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out. 
Some CAD software is capable of dynamic mathematical modeling, in which case it may be marketed as CADD. CAD technology is used in the design of tools and machinery and in the drafting and design of all types of buildings; it can also be used to design everyday objects. Furthermore, many CAD applications now offer advanced rendering and animation capabilities so engineers can better visualize their product designs. 4D BIM is a type of virtual construction engineering simulation incorporating time- or schedule-related information for project management. CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out, and save it for future editing. Computer-aided design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question. Other uses include document management and revision control using Product Data Management; potential blockage of view corridors and shadow studies are also frequently analyzed through the use of CAD.
Computer-aided design
–
Example: 3D CAD model
Example: 2D CAD drawing
CAD rendering of Sialk ziggurat based on archeological evidence
39.
Volume
–
Volume is the quantity of three-dimensional space enclosed by a closed surface; for example, the space that a substance or shape occupies or contains. Volume is often quantified numerically using the SI derived unit, the cubic metre. Three-dimensional mathematical shapes are also assigned volumes. Volumes of some simple shapes, such as regular, straight-edged ones, can be calculated with arithmetic formulas; volumes of a complicated shape can be calculated by integral calculus if a formula exists for the shape's boundary. Where a variance in shape and volume occurs, such as those that exist between different human beings, these can be calculated using techniques such as the Body Volume Index. One-dimensional figures and two-dimensional shapes are assigned zero volume in three-dimensional space. The volume of a solid can be determined by fluid displacement; displacement of liquid can also be used to determine the volume of a gas. The combined volume of two substances is usually greater than the volume of one of the substances; however, sometimes one substance dissolves in the other, and in such cases the combined volume is not additive. In differential geometry, volume is expressed by means of the volume form; in thermodynamics, volume is a fundamental parameter, and is a conjugate variable to pressure. Any unit of length gives a corresponding unit of volume: the volume of a cube whose sides have the given length. For example, a cubic centimetre is the volume of a cube whose sides are one centimetre in length. In the International System of Units, the standard unit of volume is the cubic metre. The metric system also includes the litre as a unit of volume: 1 litre = (10 cm)³ = 1000 cubic centimetres = 0.001 cubic metres, so 1 cubic metre = 1000 litres. Small amounts of liquid are often measured in millilitres, where 1 millilitre = 0.001 litres = 1 cubic centimetre. Capacity is defined by the Oxford English Dictionary as the measure applied to the content of a vessel, and to liquids, grain, or the like. 
Capacity is not identical in meaning to volume, though the two are closely related; units of capacity are the SI litre and its derived units, and Imperial units such as the gill, pint, gallon, and others. Units of volume are the cubes of units of length. In SI the units of volume and capacity are closely related: one litre is exactly 1 cubic decimetre, the capacity of a cube with a 10 cm side. In other systems the conversion is not trivial; the capacity of a fuel tank is rarely stated in cubic feet, for example. The density of an object is defined as the ratio of its mass to its volume; the inverse of density is specific volume, which is defined as volume divided by mass. Specific volume is an important concept in thermodynamics, where the volume of a working fluid is often an important parameter of a system being studied.
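The unit relationships and the density/specific-volume definitions above can be expressed as a short sketch. The helper names are illustrative choices, not a standard API:

```python
# Unit relationships stated above: 1 m^3 = 1000 L, 1 L = 1000 mL = 1000 cm^3.
LITRES_PER_CUBIC_METRE = 1000.0

def litres_to_cubic_metres(litres: float) -> float:
    """Convert litres to cubic metres (1 L = 0.001 m^3)."""
    return litres / LITRES_PER_CUBIC_METRE

def density(mass_kg: float, volume_m3: float) -> float:
    """Density rho = m / V; its inverse is the specific volume v = V / m."""
    return mass_kg / volume_m3

print(litres_to_cubic_metres(1))  # 0.001
# 1 kg of water occupying 1 litre gives a density of about 1000 kg/m^3:
print(density(1.0, litres_to_cubic_metres(1.0)))  # approximately 1000.0
```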
Volume
–
A measuring cup can be used to measure volumes of liquids. This cup measures volume in units of cups, fluid ounces, and millilitres.
40.
Random-access memory
–
Random-access memory (RAM) is a form of computer data storage which stores frequently used program instructions to increase the general speed of a system. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the location of the data inside the memory. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry; usually more than one bit of storage is accessed by the same address. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed. Other types of non-volatile memory exist that allow random access for read operations; these include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came onto the market in the early 1970s, with the first commercially available DRAM chip. Early computers used relays, mechanical counters, or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947. It stored data as electrically charged spots on the face of a cathode ray tube; since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and faster than the alternatives of the time. 
In fact, rather than the Williams tube memory being designed for the SSEM, the SSEM served as a testbed for the memory. Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings; by changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible. Magnetic core memory was the dominant form of memory system until displaced by solid-state memory in integrated circuits. In dynamic random-access memory, data was stored in the tiny capacitance of each transistor, and had to be periodically refreshed every few milliseconds before the charge could leak away.
Random-access memory
–
Example of writable volatile random-access memory: Synchronous Dynamic RAM modules, primarily used as main memory in personal computers, workstations, and servers.
These IBM tabulating machines from the 1930s used mechanical counters to store information
A portion of a core memory with a modern flash RAM SD card on top
1 Megabit chip – one of the last models developed by VEB Carl Zeiss Jena in 1989
41.
Reynolds number
–
The Reynolds number is an important dimensionless quantity in fluid mechanics used to help predict flow patterns in different fluid flow situations. It has wide applications, ranging from liquid flow in a pipe to the passage of air over an aircraft wing. The concept was introduced by George Gabriel Stokes in 1851, but the Reynolds number was named by Arnold Sommerfeld in 1908 after Osborne Reynolds, who popularized its use in 1883. A similar effect is created by the introduction of a stream of higher-velocity fluid; this relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which, as it increases, progressively inhibits turbulence. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions and is a guide to when turbulent flow will occur in a particular situation. Such scaling is not linear, and the application of Reynolds numbers to both situations allows scaling factors to be developed. The Reynolds number can be defined for several different situations where a fluid is in relative motion to a surface. These definitions generally include the fluid properties of density and viscosity, plus a velocity and a characteristic length or dimension. This dimension is a matter of convention; for example, radius and diameter are equally valid to describe spheres or circles, and for aircraft or ships the length or width can be used. For flow in a pipe, or for a sphere moving in a fluid, the internal diameter is generally used today. Other shapes, such as non-circular pipes or non-spherical objects, have an equivalent diameter defined. Special considerations apply for fluids of variable density, such as gases, or fluids of variable viscosity, such as non-Newtonian fluids. The velocity may also be a matter of convention in some circumstances. In practice, matching the Reynolds number is not on its own sufficient to guarantee similitude: fluid flow is generally chaotic, and very small changes to shape and surface roughness can result in very different flows. 
Nevertheless, Reynolds numbers are an important guide and are widely used. Osborne Reynolds famously studied the conditions in which the flow of fluid in pipes transitioned from laminar flow to turbulent flow. When the velocity was low, the dyed layer remained distinct through the entire length of the large tube; when the velocity was increased, the layer broke up at a given point. The point at which this happened was the transition point from laminar to turbulent flow. From these experiments came the dimensionless Reynolds number for dynamic similarity: the ratio of inertial forces to viscous forces.
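As a sketch of how the number is evaluated in practice, the snippet below computes Re = ρvL/μ from density, velocity, characteristic length, and dynamic viscosity. The function name and the property values are illustrative assumptions (typical textbook values for water in a pipe), not taken from the article:

```python
def reynolds_number(rho: float, v: float, L: float, mu: float) -> float:
    """Re = rho * v * L / mu: the ratio of inertial to viscous forces.
    rho: density [kg/m^3], v: velocity [m/s],
    L: characteristic length [m], mu: dynamic viscosity [Pa*s]."""
    return rho * v * L / mu

# Water at roughly 20 C flowing at 1 m/s in a 0.05 m diameter pipe:
re = reynolds_number(rho=998.0, v=1.0, L=0.05, mu=1.0e-3)
print(re)  # about 49900, far above the usual pipe-flow transition range,
           # so turbulent flow would be expected
```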
Reynolds number
–
Sir George Stokes, who introduced Reynolds numbers
Osborne Reynolds, who popularised the concept
The Moody diagram, which describes the Darcy–Weisbach friction factor f as a function of the Reynolds number and relative pipe roughness.
42.
High-resolution scheme
–
High-resolution schemes are used in the numerical solution of partial differential equations where high accuracy is required in the presence of shocks or discontinuities. They have the following properties: second- or higher-order spatial accuracy is obtained in smooth parts of the solution; solutions are free from spurious oscillations or wiggles; high accuracy is obtained around shocks and discontinuities; and the number of mesh points containing the wave is small compared with a first-order scheme of similar accuracy. General methods are not adequate for accurate resolution of steep-gradient phenomena. To avoid spurious or non-physical oscillations where shocks are present, schemes that exhibit a Total Variation Diminishing (TVD) characteristic are especially attractive. Two techniques that are proving to be particularly effective are MUSCL, a flux/slope limiter method, and the WENO (weighted essentially non-oscillatory) method; both are referred to as high-resolution schemes. MUSCL methods are generally second-order accurate in smooth regions and provide good resolution around discontinuities; they are straightforward to implement and are computationally efficient. For problems comprising both shocks and complex smooth solution structure, WENO schemes can provide higher accuracy than second-order schemes along with good resolution around discontinuities; most applications tend to use a fifth-order accurate WENO scheme. 
See also: Godunov's theorem; Sergei K. Godunov; Total variation diminishing; shock capturing methods. 
References: Godunov, Sergei K., Different Methods for Shock Waves, Ph.D. dissertation, Moscow State University; Godunov, Sergei K., A Difference Scheme for Numerical Solution of Discontinuous Solutions of Hydrodynamic Equations, Mat. Sbornik, 47, 271-306 (translated by US Joint Publications Research Service); Harten, A., High Resolution Schemes for Hyperbolic Conservation Laws; Hirsch, C., Numerical Computation of Internal and External Flows, vol 2, Wiley; Laney, C. B., Computational Gas Dynamics, Cambridge University Press; Shu, C.-W., Essentially Non-oscillatory and Weighted Essentially Non-oscillatory Schemes for Hyperbolic Conservation Laws, in Advanced Numerical Approximation of Nonlinear Hyperbolic Equations, Lecture Notes in Mathematics, vol 1697; Shu, C.-W., High Order Weighted Essentially Non-oscillatory Schemes for Convection Dominated Problems, SIAM Review, 51; Anderson, D. A. and Pletcher, R. H., Computational Fluid Mechanics and Heat Transfer, 2nd ed., Taylor and Francis; Toro, E. F., Riemann Solvers and Numerical Methods for Fluid Dynamics; van Leer, B., Towards the Ultimate Conservative Difference Scheme V: A Second Order Sequel to Godunov's Method.
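As a concrete illustration of the flux/slope limiter idea, the sketch below applies a minmod-limited, MUSCL-type update to linear advection of a step (shock-like) profile. It is a minimal example under stated assumptions: the function names and parameter choices (grid size, CFL number) are illustrative, not drawn from the references above:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: picks the smaller-magnitude slope when the
    signs agree, zero otherwise, which suppresses spurious oscillations."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_step(u, c):
    """One update of u_t + a*u_x = 0 (a > 0) with CFL number c = a*dt/dx
    and periodic boundaries, using minmod-limited linear reconstruction."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited cell slope
    u_face = u + 0.5 * (1.0 - c) * s                   # state at right face
    flux = c * u_face                                  # upwind flux * dt/dx
    return u - (flux - np.roll(flux, 1))

u = np.where(np.arange(50) < 25, 1.0, 0.0)  # step initial data
for _ in range(10):
    u = muscl_step(u, 0.5)
# The limited solution stays within the initial bounds: no over/undershoot.
print(float(u.min()), float(u.max()))
```

Replacing minmod with an unlimited slope would recover a second-order linear scheme that, per Godunov's theorem, produces oscillations at the step.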
High-resolution scheme
–
Typical high-resolution scheme based on MUSCL reconstruction.
43.
Total variation diminishing
–
In numerical methods, total variation diminishing (TVD) is a property of certain discretization schemes used to solve hyperbolic partial differential equations. The most notable application of the method is in computational fluid dynamics. The concept of TVD was introduced by Ami Harten. A numerical method is said to be total variation diminishing if TV(u^(n+1)) ≤ TV(u^n), where the total variation of the discrete solution is TV(u) = Σj |u_(j+1) − u_j|. A numerical scheme is said to be monotonicity preserving if the following property is maintained: if u^n is monotonically increasing (or decreasing) in space, then so is u^(n+1). Harten (1983) proved the following properties for a numerical scheme: a monotone scheme is TVD, and a TVD scheme is monotonicity preserving. In computational fluid dynamics, a TVD scheme is employed to capture sharper shock predictions without any misleading oscillations when the variation of the field variable φ is discontinuous. To capture the variation, fine grids are needed, and the computation becomes heavy and therefore uneconomic. The use of coarse grids with the central difference scheme, upwind scheme, or hybrid difference scheme gives inaccurate shock predictions; a TVD scheme enables sharper shock predictions on coarse grids, saving computation time, and as the scheme preserves monotonicity there are no spurious oscillations in the solution. Note that f⁺ is the flux function when the flow is in the positive direction, i.e. from left to right; so f⁺ at a face is a function of the ratio (φ_P − φ_L)/(φ_R − φ_L). Likewise, when the flow is in the negative direction, the corresponding flux function f⁻ is used. Monotone schemes are attractive for solving engineering and scientific problems because they do not produce non-physical solutions. Godunov's theorem proves that linear schemes which preserve monotonicity are, at most, only first-order accurate. Higher-order linear schemes, although more accurate for smooth solutions, are not TVD and tend to introduce spurious oscillations where discontinuities arise. To overcome these drawbacks, various high-resolution, non-linear techniques have been developed, often using flux/slope limiters. 
See also: Flux limiters; Godunov's theorem; High-resolution scheme; MUSCL scheme; Sergei K. Godunov; Total variation. 
References: Hirsch, C., Numerical Computation of Internal and External Flows, Vol 2, Wiley; Laney, C. B., Computational Gas Dynamics, Cambridge University Press; Toro, E. F., Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag; Anderson, D. A. and Pletcher, R. H., Computational Fluid Mechanics and Heat Transfer; Wesseling, P., Principles of Computational Fluid Dynamics, Springer-Verlag; Date, Anil W., Introduction to Computational Fluid Dynamics, Cambridge University Press.
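The TVD property can be checked directly in code. The sketch below (illustrative function names, NumPy assumed) computes the total variation TV(u) = Σj |u_(j+1) − u_j| on a periodic grid and verifies that a first-order upwind step, which is monotone and hence TVD, does not increase it:

```python
import numpy as np

def total_variation(u):
    """TV(u) = sum over j of |u_{j+1} - u_j| (periodic grid): the quantity
    a TVD scheme must not increase from one time step to the next."""
    return float(np.abs(np.roll(u, -1) - u).sum())

def upwind_step(u, c):
    """First-order upwind update for u_t + a*u_x = 0 with a > 0 and
    CFL number c = a*dt/dx (monotone, hence TVD, for 0 <= c <= 1)."""
    return u - c * (u - np.roll(u, 1))

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
tv_before = total_variation(u)
for _ in range(20):
    u = upwind_step(u, 0.8)
tv_after = total_variation(u)
print(tv_after <= tv_before + 1e-12)  # True: total variation did not grow
```

A linear higher-order scheme such as Lax-Wendroff, run on discontinuous data, would fail this check near the jump, which is exactly the behaviour Godunov's theorem describes.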
Total variation diminishing
–
A picture showing the control volume with velocities at the faces, nodes, and the distance between them, where 'P' is the node at the center.
44.
Ludwig Prandtl
–
Ludwig Prandtl was a German engineer. In the 1920s he developed the mathematical basis for the fundamental principles of subsonic aerodynamics in particular. His studies identified the boundary layer and established thin-airfoil and lifting-line theories; the Prandtl number was named after him. Prandtl was born in Freising, near Munich, in 1875. His mother suffered from a lengthy illness and, as a result, Ludwig spent more time with his father, a professor of engineering. His father also encouraged him to observe nature and think about his observations. He entered the Technische Hochschule Munich in 1894 and graduated with a Ph.D. under the guidance of Professor August Foeppl in six years. His work at Munich had been in solid mechanics, and his first job was as an engineer designing factory equipment. There, he entered the field of fluid mechanics when he had to design a suction device; after carrying out some experiments, he came up with a new device that worked well and used less power than the one it replaced. In 1901 Prandtl became a professor of fluid mechanics at the technical school in Hannover. It was here that he developed many of his most important theories. In 1904 he delivered a groundbreaking paper, Fluid Flow in Very Little Friction, in which he described the boundary layer and its importance for drag and streamlining. The paper also described flow separation as a result of the boundary layer. Several of his students made attempts at closed-form solutions, but failed, and in the end the approximation contained in his original paper remains in widespread use. The effect of the paper was so great that Prandtl became director of the Institute for Technical Physics at the University of Göttingen later in the year. Over the next decades he developed it into a powerhouse of aerodynamics, leading the world until the end of World War II. In 1925 the university spun off his research arm to create the Kaiser Wilhelm Institute for Flow Research.
Following earlier leads by Frederick Lanchester from 1902–1907, Prandtl worked with Albert Betz on a mathematical theory of the lifting wing; the results were published in 1918–1919 and are known as the Lanchester–Prandtl wing theory. He also extended the theory to study cambered airfoils, like those on World War I aircraft. This work led to the realization that on any wing of finite length, wing-tip effects became very important to the overall performance; considerable work was included on the nature of induced drag and wingtip vortices, which had previously been ignored. These tools enabled aircraft designers to make meaningful theoretical studies of their aircraft before they were built. Prandtl and his student Theodor Meyer developed the first theories of supersonic shock waves and flow in 1908. The Prandtl–Meyer expansion fans allowed for the construction of supersonic wind tunnels. He had little time to work on the problem further until the 1920s; today, all supersonic wind tunnels and rocket nozzles are designed using the same method.
Ludwig Prandtl
–
Ludwig Prandtl
Ludwig Prandtl
–
Ludwig Prandtl 1904 with his fluid test channel
45.
Vortex stretching
–
Vortex stretching is the lengthening of vortices in three-dimensional fluid flow, associated with a particular term in the vorticity equation. For example, vorticity transport in an inviscid flow is governed by Dω⃗/Dt = (ω⃗ · ∇)v⃗, where D/Dt is the material derivative. The source term on the right-hand side is the vortex stretching term. It amplifies the vorticity ω⃗ when the velocity is diverging in the direction parallel to ω⃗. A simple example of vortex stretching in a viscous flow is provided by the Burgers vortex. Vortex stretching is at the core of the description of the energy cascade from the large scales to the small scales in turbulence. In general, in turbulence fluid elements are more lengthened than squeezed; in the end, this results in more vortex stretching than vortex squeezing. For incompressible flow, due to volume conservation of fluid elements, the lengthening implies thinning of the elements in the directions perpendicular to the stretching direction. This reduces the length scale of the associated vorticity. Finally, at scales of the order of the Kolmogorov microscales, the turbulent kinetic energy is dissipated by viscosity. References: Vorticity and Turbulence, Springer, ISBN 0-387-94197-5; Tennekes, H. and Lumley, J. L., A First Course in Turbulence, Cambridge, MA, MIT Press, ISBN 0-262-20019-8.
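The action of the stretching term can be checked directly for a simple analytic field. The sketch below (with an illustrative strain rate, not a simulation) evaluates (ω⃗ · ∇)v⃗ for the axisymmetric straining flow underlying the Burgers vortex; vorticity aligned with the stretching axis is amplified at the strain rate:

```python
import numpy as np

a = 2.0  # strain rate (illustrative value)

# Velocity gradient tensor grad_v[i][j] = d v_i / d x_j for the axisymmetric
# straining flow v = (-a*x/2, -a*y/2, a*z) underlying the Burgers vortex.
grad_v = np.array([[-a / 2, 0.0, 0.0],
                   [0.0, -a / 2, 0.0],
                   [0.0, 0.0, a]])

omega = np.array([0.0, 0.0, 1.0])  # vorticity aligned with the stretching axis z

# Vortex stretching term (omega . grad) v: component i is omega_j * d v_i / d x_j.
stretching = grad_v @ omega
# The result is parallel to omega with magnitude a: the vorticity is amplified.
```

A vorticity vector perpendicular to the stretching axis would instead pick up the negative diagonal entries and be damped, which is the squeezing case described above.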
Vortex stretching
–
Studies of vortices in turbulent fluid motion by Leonardo da Vinci.
46.
Kinetic theory of gases
–
Kinetic theory explains macroscopic properties of gases, such as pressure, temperature, viscosity, thermal conductivity, and volume, by considering their molecular composition and motion. The theory posits that gas pressure is due to the impacts, on the walls of a container, of molecules moving at different velocities. Kinetic theory defines temperature in its own way, not identical with the thermodynamic definition. Under a microscope, the molecules making up a liquid are too small to be visible, but the jittery motion of pollen grains or dust particles suspended in it can be seen. Known as Brownian motion, it results directly from collisions between the grains or particles and the liquid molecules. As analyzed by Albert Einstein in 1905, this evidence for kinetic theory is generally seen as having confirmed the concrete material existence of atoms. The theory for ideal gases makes the following assumptions: The gas consists of very small particles known as molecules. This smallness of their size is such that the total volume of the individual gas molecules added up is negligible compared to the volume of the smallest open ball containing all the molecules. This is equivalent to stating that the average distance separating the gas particles is large compared to their size. These particles have the same mass. The number of molecules is so large that statistical treatment can be applied. These molecules are in constant, random, and rapid motion. The rapidly moving particles constantly collide among themselves and with the walls of the container. All these collisions are perfectly elastic, which means the molecules are considered to be perfectly spherical in shape and elastic in nature. Except during collisions, the interactions among molecules are negligible; this means that the inter-particle distance is much larger than the thermal de Broglie wavelength and the molecules are treated as classical objects. Because of the above two assumptions, their dynamics can be treated classically.
This means that the equations of motion of the molecules are time-reversible. The average kinetic energy of the gas particles depends only on the absolute temperature of the system; the kinetic theory has its own definition of temperature, not identical with the thermodynamic definition. The elapsed time of a collision between a molecule and the container's wall is negligible when compared to the time between successive collisions. Because they have mass, the gas molecules will be affected by gravity. More modern developments relax these assumptions and are based on the Boltzmann equation. These can accurately describe the properties of dense gases, because they include the volume of the molecules. The necessary assumptions are the absence of quantum effects and molecular chaos. Expansions to higher orders in the density are known as virial expansions.
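The central quantitative result, that the average translational kinetic energy per molecule is (3/2)k_B T, can be turned into a short calculation; the nitrogen molecular mass used below is an illustrative round value:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_kinetic_energy(temperature):
    """Average translational kinetic energy per molecule: (3/2) k_B T."""
    return 1.5 * K_B * temperature

def rms_speed(temperature, mass):
    """Root-mean-square molecular speed, sqrt(3 k_B T / m), with mass in kg."""
    return math.sqrt(3.0 * K_B * temperature / mass)

# A nitrogen molecule (mass roughly 4.65e-26 kg) at room temperature, 300 K
v = rms_speed(300.0, 4.65e-26)  # about 517 m/s
```

The result, several hundred metres per second at room temperature, is what makes the rapid-motion assumption above plausible for ordinary gases.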
Kinetic theory of gases
–
Hydrodynamica front cover
Kinetic theory of gases
–
The temperature of an ideal monatomic gas is proportional to the average kinetic energy of its atoms. The size of helium atoms relative to their spacing is shown to scale under 1950 atmospheres of pressure. The atoms have a certain, average speed, slowed down here two trillion fold from room temperature.
47.
Langevin equation
–
In statistical physics, a Langevin equation is a stochastic differential equation describing the time evolution of a subset of the degrees of freedom of a system. These degrees of freedom typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. The degree of freedom of interest here is the position x of the particle. The δ-function form of the correlations in time means that the force at a time t is assumed to be completely uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a particle at a much longer time scale, and in this limit the δ-correlation becomes adequate. Another prototypical feature of the Langevin equation is the occurrence of the damping coefficient λ in the correlation function of the random force. A strictly δ-correlated fluctuating force η(t) is not a function in the usual mathematical sense, and the Langevin equation as it stands requires an interpretation in this case. There is a derivation of a generic Langevin equation from classical mechanics. This generic equation plays a central role in the theory of critical dynamics. The equation for Brownian motion above is a special case. An essential condition of the derivation is a criterion dividing the degrees of freedom into the categories slow and fast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Densities of conserved quantities, and in particular their long-wavelength components, thus are slow-variable candidates. Technically, this division is realized with the Zwanzig projection operator, the essential tool in the derivation.
The derivation is not completely rigorous because it relies on assumptions akin to assumptions required elsewhere in basic statistical mechanics. Let A = {A_i} denote the slow variables. The fluctuating force η_i(t) obeys a Gaussian probability distribution with correlation function ⟨η_i(t) η_j(t′)⟩ = 2 λ_{i,j}(A) δ(t − t′), and this implies the Onsager reciprocity relation λ_{i,j} = λ_{j,i} for the damping coefficients λ. The dependence dλ_{i,j}/dA_j of λ on A is negligible in most cases. In the Brownian motion case one would have H = p²/2m, A = {p} or A = {x, p}, and [A_i, A_j] = δ_{i,j}.
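A common numerical check of these relations (a sketch with unit parameters rather than physical ones) is to integrate the Brownian-motion Langevin equation with the Euler-Maruyama method and verify that the stationary velocity variance approaches the equipartition value kT/m fixed by the fluctuation-dissipation relation:

```python
import numpy as np

def langevin_velocity(steps, dt, lam, kT, m, rng):
    """Euler-Maruyama integration of the Langevin equation
    m dv/dt = -lam * v + eta(t), where eta is delta-correlated noise whose
    strength 2*lam*kT is fixed by the fluctuation-dissipation relation."""
    v = np.empty(steps)
    v[0] = 0.0
    noise_amp = np.sqrt(2.0 * lam * kT * dt) / m  # sqrt(dt) scaling of the noise
    for i in range(1, steps):
        v[i] = v[i - 1] - (lam / m) * v[i - 1] * dt + noise_amp * rng.standard_normal()
    return v

rng = np.random.default_rng(42)
v = langevin_velocity(100_000, 0.01, lam=1.0, kT=1.0, m=1.0, rng=rng)
v2_mean = np.mean(v[5_000:] ** 2)  # settles near kT/m = 1 (equipartition)
```

The sqrt(dt) factor in the noise amplitude is exactly where the δ-correlation shows up in a discretized scheme, and it is why the equation needs an interpretation rule in the continuum limit.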
Langevin equation
–
Phase portrait of a harmonic oscillator showing spreading due to the Langevin Equation.
48.
N-body problem
–
In physics, the n-body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem has been motivated by the desire to understand the motions of the Sun, Moon, and planets; in the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem. The n-body problem in general relativity is considerably more difficult to solve. Having formulated his laws of motion and gravitation, Newton and others soon discovered over the course of a few years that those equations did not predict some orbits correctly, or even very well; Newton realized it was because gravitational interactive forces amongst all the planets were affecting all their orbits. Thus came the awareness and rise of the n-body problem in the late 17th century. Ironically, this conformity led to the wrong approach: after Newton's time the n-body problem historically was not stated correctly because it did not include a reference to those gravitational interactive forces. Newton does not say it directly but implies in his Principia that the n-body problem is unsolvable because of those gravitational interactive forces. Newton said in his Principia, paragraph 21: And hence it is that the attractive force is found in both bodies. The Sun attracts Jupiter and the planets, Jupiter attracts its satellites. Two bodies can be drawn to each other by the contraction of a rope between them. Newton concluded via his third law of motion that according to this Law all bodies must attract each other. This last statement, which implies the existence of gravitational interactive forces, is key. The problem of finding the general solution of the n-body problem was considered very important. Indeed, in the late 19th century King Oscar II of Sweden, advised by Gösta Mittag-Leffler, established a prize for its solution; in case the problem could not be solved, any other important contribution to classical mechanics would then be considered to be prizeworthy.
The prize was awarded to Poincaré, even though he did not solve the original problem; the version finally printed contained many important ideas which led to the development of chaos theory. The problem as stated originally was finally solved by Karl Fritiof Sundman for n = 3. The n-body problem considers n point masses m_i, i = 1, 2, …, n, in an inertial reference frame in three-dimensional space ℝ³ moving under the influence of mutual gravitational attraction. Each mass m_i has a position vector q_i. Newton's second law says that mass times acceleration, m_i d²q_i/dt², is equal to the sum of the gravitational forces on the mass: m_i d²q_i/dt² = Σ_{j≠i} G m_i m_j (q_j − q_i) / ‖q_j − q_i‖³.
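These equations of motion are straightforward to integrate numerically. The sketch below (code units with G = 1 and a deliberately simple O(n²) force loop) uses the symplectic leapfrog method on two equal masses started on a circular orbit, whose separation should remain constant:

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def accelerations(q, m):
    """Acceleration of each point mass from Newtonian pairwise gravity.
    q: (n, 3) positions, m: (n,) masses."""
    a = np.zeros_like(q)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                r = q[j] - q[i]
                a[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return a

def leapfrog(q, v, m, dt, steps):
    """Kick-drift-kick leapfrog: a second-order symplectic integrator."""
    for _ in range(steps):
        v = v + 0.5 * dt * accelerations(q, m)
        q = q + dt * v
        v = v + 0.5 * dt * accelerations(q, m)
    return q, v

# Two equal masses started on a circular orbit about their barycentre:
# separation 1, so each needs orbital speed sqrt(G*m/(2*r)) = sqrt(0.5).
m = np.array([1.0, 1.0])
q0 = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
v0 = np.array([[0.0, -0.5 ** 0.5, 0.0], [0.0, 0.5 ** 0.5, 0.0]])
q1, v1 = leapfrog(q0, v0, m, dt=0.01, steps=100)
separation = np.linalg.norm(q1[0] - q1[1])  # stays close to 1
```

Production n-body codes replace the O(n²) loop with tree or fast-multipole methods, but the symplectic time-stepping shown here is the standard choice because it does not secularly drift in energy.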
N-body problem
–
The Real Motion vs. Kepler's Apparent Motion
N-body problem
–
Restricted 3-Body Problem
49.
Two-phase flow
–
In fluid mechanics, two-phase flow is a flow of gas and liquid, usually in a pipe. Two-phase flow is an example of multiphase flow. Two-phase flow can occur in various forms; the widely accepted method to categorize two-phase flows is to consider the velocity of each phase as if no other phase were present. The resulting parameter is a concept called superficial velocity. Historically, probably the most commonly studied cases of two-phase flow are in large-scale power systems. Coal- and gas-fired power stations use very large boilers to produce steam for use in turbines; in such cases, pressurised water is passed through heated pipes and it changes to steam as it moves through the pipe. The design of boilers requires an understanding of two-phase flow heat-transfer and pressure-drop behaviour. Even more critically, nuclear reactors use water to remove heat from the reactor core using two-phase flow. A great deal of study has been performed on the nature of two-phase flow in such cases, so that engineers can design against possible failures in pipework and loss of pressure. Another case where two-phase flow can occur is in pump cavitation; here a pump is operating close to the vapor pressure of the fluid being pumped. If the pressure drops further, which can happen locally near the vanes of the pump, for example, then a phase change can occur and the flow becomes two-phase. Similar effects can occur on marine propellers; wherever it occurs, cavitation is a serious design concern. When a vapor bubble collapses, it can produce very large pressure spikes. The above two-phase flow cases are for a single fluid occurring by itself as two different phases, such as steam and water. The term two-phase flow is also applied to mixtures of different fluids having different phases, such as air and water, or oil and water. Sometimes even three-phase flow is considered, such as in oil and gas pipelines. Other interesting areas where two-phase flow is studied include climate systems such as clouds, and groundwater flow, in which the movement of water and air through the soil is studied.
Other examples of two-phase flow include bubbles, rain, waves on the sea, foam, fountains, mousse, and cryogenics. Several features make two-phase flow an interesting and challenging branch of fluid mechanics: surface tension makes all dynamical problems nonlinear, and in the case of air and water at standard temperature and pressure the densities of the two phases differ by a factor of about 1,000. Similar differences are typical of liquid water/water vapor densities. The sound speed changes dramatically for materials undergoing phase change, and can be orders of magnitude different.
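The superficial velocity mentioned above is simple to compute: the volumetric flow rate of one phase divided by the full pipe cross-section, as if the other phase were absent. The pipe diameter and flow rates below are hypothetical illustrative values:

```python
import math

def superficial_velocity(volumetric_flow, pipe_area):
    """Superficial velocity of one phase: its volumetric flow rate divided by
    the full pipe cross-sectional area, as if the other phase were absent."""
    return volumetric_flow / pipe_area

# Hypothetical 50 mm diameter pipe carrying 2 L/s of gas and 0.5 L/s of liquid
area = math.pi * 0.05 ** 2 / 4.0               # pipe cross-section, m^2
j_gas = superficial_velocity(0.002, area)      # about 1.02 m/s
j_liquid = superficial_velocity(0.0005, area)  # about 0.25 m/s
```

The pair (j_gas, j_liquid) is what flow-pattern maps use as coordinates to classify a two-phase flow as bubbly, slug, annular, and so on.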
Two-phase flow
–
Different modes of two-phase flows.
50.
Ordinary differential equations
–
In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and its derivatives. The term ordinary is used in contrast with the partial differential equation, which may be with respect to more than one independent variable. Linear ODEs, whose solutions can be added and multiplied by coefficients, are well understood, and exact closed-form solutions are obtained for many of them. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions. Ordinary differential equations arise in many contexts of mathematics and science. Mathematical descriptions of change use differentials and derivatives; often, quantities are defined as the rate of change of other quantities, or gradients of quantities, which is how they enter differential equations. Specific mathematical fields include geometry and analytical mechanics; scientific fields include much of physics and astronomy, meteorology, chemistry, biology, ecology and population modelling, and economics. Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, and d'Alembert. In general, F is a function of the position x(t) of the particle at time t. The unknown function x(t) appears on both sides of the differential equation, and is indicated in the notation F(x(t)). In what follows, let y be a dependent variable and x an independent variable; the notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. Given F, a function of x, y, and derivatives of y, an equation of the form y⁽ⁿ⁾ = F(x, y, y′, …, y⁽ⁿ⁻¹⁾) is called an explicit ordinary differential equation of order n. The function r is called the source term, leading to two further important classifications: homogeneous, if r = 0, in which case one automatic solution is the trivial solution y = 0.
The solution of a homogeneous equation is a complementary function, denoted here by y_c. The additional solution arising from the source term is the particular integral, denoted here by y_p. The general solution to a linear equation can be written as y = y_c + y_p. Non-linear: a differential equation that cannot be written in the form of a linear combination. A number of coupled differential equations form a system of equations; in column vector form, y′ = F(x, y). These are not necessarily linear. The implicit analogue is F(x, y, y′) = 0, where 0 = (0, 0, …, 0) is the zero vector. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs), and this distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than ODE systems. Given a differential equation F(x, y, y′, …, y⁽ⁿ⁾) = 0, a function u: I ⊂ ℝ → ℝ is called a solution or integral curve for F if u is n-times differentiable on I and F(x, u, u′, …, u⁽ⁿ⁾) = 0 for all x ∈ I.
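When no closed-form solution exists, numerical methods approximate the integral curve. A minimal sketch of the classical fourth-order Runge-Kutta scheme, tested on y′ = y, where the exact solution e^t is known:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = y with y(0) = 1 has the exact solution y(t) = e^t.
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
# y is now close to e = 2.71828..., with error on the order of 1e-6
```

Because the method's global error scales as h⁴, halving the step size reduces the error by roughly a factor of sixteen, which is why RK4 remains a common default for non-stiff ODEs.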
Ordinary differential equations
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
51.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number, identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340-01381-8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13).
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the commercial library services agency Thorpe-Bowker.
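The check digits follow fixed rules: an ISBN-13 weights its digits alternately 1 and 3 and pads the sum to a multiple of 10, while an ISBN-10 (or SBN) weights positions 10 down to 2 and works modulo 11, with 'X' standing for 10. A short sketch, checked against the two examples given in this article:

```python
def isbn13_check_digit(first12):
    """ISBN-13 check digit: weight the twelve digits alternately 1 and 3,
    then pad the total up to the next multiple of 10."""
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def isbn10_check_digit(first9):
    """ISBN-10 (and SBN) check digit: weight positions 10 down to 2 and take
    the result modulo 11; a remainder of 10 is written as 'X'."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)

# 978-3-16-148410-0 from the bar-code example, and SBN 340-01381-8 converted
# to ISBN 0-340-01381-8.
check13 = isbn13_check_digit("978316148410")  # 0
check10 = isbn10_check_digit("034001381")     # '8'
```

Note that converting an SBN by prefixing 0 leaves the ISBN-10 check digit unchanged, since the prefixed digit carries weight 10 but value 0, which is why no re-calculation is needed.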
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
52.
Dortmund University of Technology
–
TU Dortmund University is a university in Dortmund, North Rhine-Westphalia, Germany, with over 30,000 students and over 3,000 staff. It is situated in the Ruhr area, the fourth largest urban area in Europe. The university is highly ranked in terms of its research performance in the areas of physics, electrical engineering, chemistry, and economics. The University of Dortmund was founded in 1968, during the decline of the coal and steel industry in the region, and its establishment was seen as an important move in the economic change from heavy industry to technology. The university's main areas of research are the sciences, engineering, pedagogy/teacher training in a wide spectrum of subjects, and special education. The University of Dortmund was originally designed to be a technical university. In 2006, the University of Dortmund hosted the 11th Federation of International Robot-soccer Association RoboWorld Cup. The university's robot soccer team, the Dortmund Droids, became world champion in the RoboWorld Cup 2002. Following the Zeitgeist of the late 1960s in Germany, the university was built auf der grünen Wiese (on a greenfield site) about 2 miles outside of downtown Dortmund. One of the most prominent buildings in the university is the Mathetower, which houses the faculty of Mathematics. The first point of registration for .de domains was at the Dortmund University Department of Computer Science. Former president of Germany Johannes Rau was awarded an honorary degree from the university in 2004, and Carl Djerassi was awarded an honorary degree for his science-in-fiction in 2009. External links: ESDP-Network, ConRuhr, official homepage of the TU Dortmund University.
Dortmund University of Technology
–
Dortmund University's Mathetower
Dortmund University of Technology
–
Official logo of the TU Dortmund University
Dortmund University of Technology
–
Student hostels
Dortmund University of Technology
–
Campus Food Court
53.
Computational fluid dynamics
–
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved; ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial experimental validation of such software is performed using a wind tunnel, with the final validation coming in full-scale testing. The fundamental basis of almost all CFD problems is the Navier–Stokes equations; these equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows these equations can be linearized to yield the linearized potential equations. Historically, methods were first developed to solve the linearized potential equations. Two-dimensional methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. One of the earliest types of calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space in cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD; in fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book. The computer power available paced development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. This group was led by Francis H.
Harlow, who is widely considered one of the pioneers of CFD. Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world. The first paper with a three-dimensional model was published by John Hess and A. M. O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls. The first lifting Panel Code was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing, Lockheed, Douglas, McDonnell Aircraft, and NASA. Some were higher-order codes, using higher-order distributions of surface singularities, while others used single singularities on each surface panel. The advantage of the lower-order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of submarines, surface ships, automobiles, helicopters, and aircraft.
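For incompressible flow, the potential equations that the earliest methods targeted reduce to Laplace's equation for the velocity potential. A minimal sketch of the kind of relaxation calculation involved (a toy boundary-value problem, not any historical code):

```python
import numpy as np

def jacobi_laplace(phi, iterations):
    """Jacobi iteration for the 2-D Laplace equation: each interior point is
    replaced by the average of its four neighbours; boundary values stay fixed."""
    for _ in range(iterations):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2])
    return phi

# Square domain with the potential held at 1 on the top edge and 0 elsewhere.
phi = np.zeros((21, 21))
phi[0, :] = 1.0
phi = jacobi_laplace(phi, 2000)
centre = phi[10, 10]  # converges to 0.25, as symmetry requires
```

Jacobi iteration converges slowly, which is exactly the bottleneck that drove the development of successive over-relaxation and, later, multigrid methods in practical CFD solvers.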
Computational fluid dynamics
–
Computational physics
Computational fluid dynamics
–
A computer simulation of high velocity air flow around the Space Shuttle during re-entry.
Computational fluid dynamics
–
A simulation of the Hyper-X scramjet vehicle in operation at Mach 7
Computational fluid dynamics
–
Volume rendering of a non-premixed swirl flame as simulated by LES.