Radar is a detection system that uses radio waves to determine the range, angle, or velocity of objects. It can be used to detect aircraft, spacecraft, guided missiles, motor vehicles, weather formations, and terrain. A radar system consists of a transmitter producing electromagnetic waves in the radio or microwave domain, a transmitting antenna, a receiving antenna, and a receiver and processor to determine properties of the objects. Radio waves from the transmitter reflect off the object and return to the receiver, giving information about the object's location and speed. Radar was developed secretly for military use by several nations in the period before and during World War II. A key development was the cavity magnetron in the UK, which allowed the creation of relatively small systems with sub-meter resolution. The term RADAR was coined in 1940 by the United States Navy as an acronym for RAdio Detection And Ranging. The term radar has since entered English and other languages as a common noun, losing all capitalization.
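The relationship between echo delay and range, and between Doppler shift and radial velocity, can be illustrated with a short sketch; the pulse timing and frequency values below are hypothetical and chosen only for illustration.

```python
# Minimal sketch of the basic radar range and Doppler relations (illustrative values).
C = 299_792_458.0  # speed of light in m/s

def range_from_delay(round_trip_time_s: float) -> float:
    """Target range: the echo travels out and back, so divide the path length by two."""
    return C * round_trip_time_s / 2.0

def radial_velocity_from_doppler(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial velocity estimated from the two-way Doppler shift of the echo."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

if __name__ == "__main__":
    # A hypothetical echo arriving 66.7 microseconds after transmission ...
    print(f"range: {range_from_delay(66.7e-6) / 1000:.1f} km")
    # ... and a hypothetical +2 kHz Doppler shift on a 3 GHz carrier.
    print(f"radial velocity: {radial_velocity_from_doppler(2e3, 3e9):.1f} m/s")
```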
The modern uses of radar are diverse, including air and terrestrial traffic control, radar astronomy, air-defense systems, antimissile systems, marine radars to locate landmarks and other ships, aircraft anticollision systems, ocean surveillance systems, outer space surveillance and rendezvous systems, meteorological precipitation monitoring, flight control systems, guided missile target locating systems, ground-penetrating radar for geological observations, and range-controlled radar for public health surveillance. High tech radar systems use digital signal processing and machine learning and are capable of extracting useful information from very high noise levels. Radar is a key technology that self-driving systems are designed to use, along with sonar and other sensors. Other systems similar to radar make use of other parts of the electromagnetic spectrum; one example is lidar. With the emergence of driverless vehicles, radar is expected to help the automated platform monitor its environment, thus preventing unwanted incidents.
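As a toy illustration of digital signal processing pulling a radar echo out of noise, the sketch below cross-correlates a noisy received trace with the known transmitted pulse (a matched filter); the pulse shape, noise level, and delay are invented for the example.

```python
# Matched-filter sketch: recover the delay of an echo hidden in receiver noise (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

pulse = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 128))   # known transmitted waveform
received = rng.normal(0.0, 1.0, 1024)                     # receiver noise
true_delay = 400
received[true_delay:true_delay + pulse.size] += pulse     # echo added at the true delay (0 dB per-sample SNR)

# Cross-correlate the received signal with the transmitted pulse and pick the peak.
correlation = np.correlate(received, pulse, mode="valid")
estimated_delay = int(np.argmax(correlation))

print(f"true delay: {true_delay} samples, estimated delay: {estimated_delay} samples")
```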
As early as 1886, German physicist Heinrich Hertz showed that radio waves could be reflected from solid objects. In 1895, Alexander Popov, a physics instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus using a coherer tube for detecting distant lightning strikes; the next year, he added a spark-gap transmitter. In 1897, while testing this equipment for communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel. In his report, Popov wrote that this phenomenon might be used for detecting objects, but he did nothing more with the observation. The German inventor Christian Hülsmeyer was the first to use radio waves to detect "the presence of distant metallic objects". In 1904, he demonstrated the feasibility of detecting a ship in dense fog, but not its distance from the transmitter. He obtained a patent for his detection device in April 1904, and later a patent for a related amendment for estimating the distance to the ship.
He also received a British patent on September 23, 1904 for a full radar system, which he called a telemobiloscope. It operated on a 50 cm wavelength, and the pulsed radar signal was created via a spark gap. His system used the classic antenna setup of a horn antenna with a parabolic reflector and was presented to German military officials in practical tests in Cologne and Rotterdam harbour, but was rejected. In 1915, Robert Watson-Watt used radio technology to provide advance warning to airmen, and during the 1920s went on to lead the UK research establishment to make many advances using radio techniques, including the probing of the ionosphere and the detection of lightning at long distances. Through his lightning experiments, Watson-Watt became an expert on the use of radio direction finding before turning his inquiry to shortwave transmission. Requiring a suitable receiver for such studies, he told the "new boy" Arnold Frederic Wilkins to conduct an extensive review of available shortwave units. Wilkins selected a General Post Office model after noting its manual's description of a "fading" effect when aircraft flew overhead.
Across the Atlantic in 1922, after placing a transmitter and receiver on opposite sides of the Potomac River, U.S. Navy researchers A. Hoyt Taylor and Leo C. Young discovered that ships passing through the beam path caused the received signal to fade in and out. Taylor submitted a report suggesting that this phenomenon might be used to detect the presence of ships in low visibility, but the Navy did not continue the work. Eight years later, Lawrence A. Hyland at the Naval Research Laboratory observed similar fading effects from passing aircraft. Before the Second World War, researchers in the United Kingdom, Germany, Japan, the Netherlands, the Soviet Union, and the United States, independently and in great secrecy, developed technologies that led to the modern version of radar. Australia, New Zealand, and South Africa followed prewar Great Britain's radar development, and Hungary generated its radar technology during the war. In France in 1934, following systematic studies on the split-anode magnetron, the research branch of the Compagnie Générale de Télégraphie Sans Fil, headed by Maurice Ponte with Henri Gutton, Sylvain Berline and M. Hugon, began developing an obstacle-locating radio apparatus.
National Geospatial-Intelligence Agency
The National Geospatial-Intelligence Agency (NGA) is a combat support agency under the United States Department of Defense and a member of the United States Intelligence Community, with the primary mission of collecting and distributing geospatial intelligence (GEOINT) in support of national security. NGA was known as the National Imagery and Mapping Agency until 2003. NGA headquarters, known as NGA Campus East, is located at Fort Belvoir North Area in Virginia. The agency operates major facilities in the St. Louis, Missouri area, as well as support and liaison offices worldwide. The NGA headquarters, at 2.3 million square feet, is the third-largest government building in the Washington metropolitan area, after the Pentagon and the Ronald Reagan Building. In addition to using GEOINT for U.S. military and intelligence efforts, the NGA provides assistance during natural and man-made disasters and security planning for major events such as the Olympic Games. In September 2018, researchers at the National Geospatial-Intelligence Agency released a high-resolution terrain map of Antarctica, named the "Reference Elevation Model of Antarctica".
U.S. mapping and charting efforts remained largely unchanged until World War I, when aerial photography became a major contributor to battlefield intelligence. Using stereo viewers, photo-interpreters reviewed thousands of images. Many of these were of the same target at different angles and times, giving rise to what became modern imagery analysis and mapmaking. The Engineer Reproduction Plant (ERP) was the Army Corps of Engineers' first attempt to centralize mapping production and distribution. It was located on the grounds of the Army War College in Washington, D.C. Previously, topographic mapping had been a function of individual field engineer units using field surveying techniques or copying existing or captured products. In addition, ERP assumed the "supervision and maintenance" of the War Department Map Collection, effective April 1, 1939. With the advent of Second World War aviation, field surveys began giving way to photogrammetry, photo interpretation, and geodesy. During wartime, it became possible to compile maps with minimal field work.
Out of this emerged the Army Map Service (AMS), which absorbed the existing ERP in May 1942. It was located at the Dalecarlia Site on MacArthur Blvd., just outside Washington, D.C., in Montgomery County and adjacent to the Dalecarlia Reservoir. AMS was designated as an Engineer field activity, effective July 1, 1942, by General Order 22, OCE, June 19, 1942. The Army Map Service combined many of the Army's remaining geographic intelligence organizations and the Engineer Technical Intelligence Division. AMS was redesignated the U.S. Army Topographic Command on September 1, 1968, and continued as an independent organization until 1972, when it was merged into the new Defense Mapping Agency and redesignated as the DMA Topographic Center. The agency's credit union, Constellation Federal Credit Union, was chartered during the Army Map Service era, in 1944, and has continued to serve the successive legacy organizations and their families. After the war, as airplane capacity and range improved, the need for charts grew. The Army Air Corps established its own map unit, renamed ACP in 1943 and located in St. Louis, Missouri.
ACP was known as the U.S. Air Force Aeronautical Chart and Information Center from 1952 to 1972. A credit union was chartered for the ACP in 1948, called Aero Chart Credit Union; it was renamed Arsenal Credit Union in 1952, a nod to the St. Louis site's Civil War-era use as an arsenal. Shortly before leaving office in January 1961, President Dwight D. Eisenhower authorized the creation of the National Photographic Interpretation Center (NPIC), a joint project of the CIA and the US DoD. NPIC was a component of the CIA's Directorate of Science and Technology, and its primary function was imagery analysis. NPIC became part of the National Imagery and Mapping Agency in 1996. NPIC first identified the Soviet Union's basing of missiles in Cuba in 1962. By exploiting images from U-2 overflights and film from canisters ejected by orbiting Corona satellites, NPIC analysts developed the information necessary to inform U.S. influence operations during the Cuban Missile Crisis. Their analysis garnered worldwide attention when the Kennedy Administration declassified and made public a portion of the images depicting the Soviet missiles on Cuban soil.
The Defense Mapping Agency (DMA) was created on January 1, 1972, to consolidate all U.S. military mapping activities. DMA's "birth certificate", DoD Directive 5105.40, resulted from a classified Presidential directive, "Organization and Management of the U.S. Foreign Intelligence Community", which directed the consolidation of mapping functions dispersed among the military services. DMA became operational on July 1, 1972, pursuant to General Order 3, DMA. On October 1, 1996, DMA was folded into the National Imagery and Mapping Agency, which later became NGA. DMA was first headquartered at the United States Naval Observatory in Washington, D.C., and later at Falls Church, Virginia; its civilian workforce was concentrated at production sites in Bethesda, Maryland, Northern Virginia, and St. Louis, Missouri. DMA was formed from the Mapping and Geodesy Division of the Defense Intelligence Agency and from various mapping-related organizations of the military services. The DMA Hydrographic Center (DMAHC) was formed in
World Geodetic System
The World Geodetic System (WGS) is a standard for use in cartography and satellite navigation, including GPS. The standard includes the definition of the coordinate system's fundamental and derived constants, the ellipsoidal Earth Gravitational Model, a description of the associated World Magnetic Model, and a current list of local datum transformations. The latest revision is WGS 84, established in 1984 and last revised in 2004. Earlier schemes included WGS 72, WGS 66, and WGS 60. WGS 84 is the reference coordinate system used by the Global Positioning System. The coordinate origin of WGS 84 is meant to be located at the Earth's center of mass. The WGS 84 meridian of zero longitude is the IERS Reference Meridian, 5.3 arc seconds or 102 metres east of the Greenwich meridian at the latitude of the Royal Observatory. The WGS 84 datum surface is an oblate spheroid with equatorial radius a = 6378137 m and flattening f = 1/298.257223563. The polar semi-minor axis b equals a × (1 − f) = 6356752.3142 m. WGS 84 uses the Earth Gravitational Model 2008.
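As an illustration of these defining constants, the following sketch derives the semi-minor axis from a and f and converts a geodetic latitude, longitude, and height to Earth-centered Cartesian (ECEF) coordinates on the WGS 84 ellipsoid; the sample coordinates are arbitrary.

```python
# WGS 84 ellipsoid constants and a geodetic-to-ECEF conversion (illustrative sketch).
import math

A = 6378137.0                 # semi-major (equatorial) axis, metres
F = 1.0 / 298.257223563       # flattening
B = A * (1.0 - F)             # semi-minor (polar) axis, ~6356752.3142 m
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, height_m: float) -> tuple[float, float, float]:
    """Convert geodetic coordinates on the WGS 84 ellipsoid to ECEF X, Y, Z in metres."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius of curvature
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + height_m) * math.sin(lat)
    return x, y, z

if __name__ == "__main__":
    print(f"semi-minor axis b = {B:.4f} m")
    # Arbitrary sample point: 51.4778 N, 0.0 E, 45 m above the ellipsoid.
    print(geodetic_to_ecef(51.4778, 0.0, 45.0))
```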
The EGM96 geoid defines the nominal sea level surface by means of a spherical harmonics series of degree 360. The deviations of the EGM96 geoid from the WGS 84 reference ellipsoid range from about −105 m to about +85 m. EGM96 differs from the original WGS 84 geoid, referred to as EGM84. WGS 84 uses the World Magnetic Model 2015v2; the new version of WMM 2015 became necessary due to the extraordinarily large and erratic movements of the north magnetic pole. The next regular update will occur in late 2019. Efforts to supplement the various national surveying systems began in the 19th century with F. R. Helmert's famous book Mathematische und Physikalische Theorien der Höheren Geodäsie. Austria and Germany founded the Zentralbüro für die Internationale Erdmessung, and a series of global ellipsoids of the Earth were derived. A unified geodetic system for the whole world became essential in the 1950s for several reasons: international space science and the beginning of astronautics; the lack of inter-continental geodetic information; the inability of the large geodetic systems, such as the European Datum (ED), the North American Datum (NAD), and the Tokyo Datum (TD), to provide a worldwide geo-data basis; the need for global maps for navigation and geography; and Western Cold War preparedness, which necessitated a standardised, NATO-wide geospatial reference system in accordance with the NATO Standardisation Agreement. In the late 1950s, the United States Department of Defense, together with scientists of other institutions and countries, began to develop the needed world system to which geodetic data could be referred and compatibility established between the coordinates of widely separated sites of interest. Efforts of the U.S. Army and Air Force were combined, leading to the DoD World Geodetic System 1960 (WGS 60). The term datum as used here refers to a smooth surface somewhat arbitrarily defined as zero elevation, consistent with a set of surveyor's measures of distances between various stations and differences in elevation, all reduced to a grid of latitudes, longitudes and elevations. Heritage surveying methods found elevation differences from a local horizontal determined by the spirit level, plumb line, or an equivalent device that depends on the local gravity field.
As a result, the elevations in the data are referenced to the geoid, a surface that is not readily found using satellite geodesy; the latter observational method is more suitable for global mapping. Therefore, a motivation for, and a substantial problem in, the WGS and similar work is to patch together data that were not only made separately, for different regions, but also to re-reference the elevations to an ellipsoid model rather than to the geoid. In accomplishing WGS 60, a combination of available surface gravity data, astro-geodetic data and results from HIRAN and Canadian SHORAN surveys was used to define a best-fitting ellipsoid and an earth-centered orientation for each of the selected datums. The sole contribution of satellite data to the development of WGS 60 was a value for the ellipsoid flattening, obtained from the nodal motion of a satellite. Prior to WGS 60, the U.S. Army and U.S. Air Force had each developed a world system by using different approaches to the gravimetric datum orientation method. To determine their gravimetric orientation parameters, the Air Force used the mean of the differences between the gravimetric and astro-geodetic deflections and geoid heights at selected stations in the areas of the major datums.
The Army performed an adjustment to minimize the difference between astro-geodetic and gravimetric geoids. By matching the relative astro-geodetic geoids of the selected datums with an earth-centered gravimetric geoid, the selected datums were reduced to an earth-centered orientation. Since the Army and Air Force systems agreed remarkably well for the NAD, ED and TD areas, they were consolidated and became WGS 60. Improvements to the global system included the Astrogeoid of Irene Fischer and the astronautic Mercury datum. In January 1966, a World Geodetic System Committee composed of representatives from the United States Army and Air Force was charged with developing an improved WGS, needed to satisfy mapping and geodetic requirements. Additional surface gravity observa
An aircraft is a machine that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil, or, in a few cases, the downward thrust from jet engines. Common examples of aircraft include airplanes, airships and hot air balloons. The human activity that surrounds aircraft is called aviation. The science of aviation, including designing and building aircraft, is called aeronautics. Crewed aircraft are flown by an onboard pilot, but unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion and others. Flying model craft and stories of manned flight go back many centuries; however, the first manned ascent – and safe descent – in modern times took place by hot-air balloons developed in the 18th century. Each of the two World Wars led to great technical advances. The history of aircraft can be divided into five eras: Pioneers of flight, from the earliest experiments to 1914.
First World War, 1914 to 1918. Aviation between the World Wars, 1918 to 1939. Second World War, 1939 to 1945. The postwar era, called the jet age, 1945 to the present day. Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large gasbags or canopies filled with a low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of this gas is added to the weight of the aircraft structure, it adds up to the same weight as the air that the craft displaces. Small hot-air balloons called sky lanterns were first invented in ancient China prior to the 3rd century BC and used in cultural celebrations; they were only the second type of aircraft to fly, the first being kites, which were first invented in ancient China over two thousand years ago. Originally, a balloon was any aerostat, while the term airship was used for large, powered aircraft designs – usually fixed-wing – though none had yet been built. In 1919, Frederick Handley Page was reported as referring to "ships of the air," with smaller passenger types as "Air yachts."
In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by a rigid outer framework and separate aerodynamic skin surrounding the gas bags, were produced, the Zeppelins being the largest and most famous. There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships, so "airship" came to be synonymous with these aircraft. Several accidents, such as the Hindenburg disaster in 1937, led to the demise of these airships. Nowadays a "balloon" is an unpowered aerostat and an "airship" is a powered one. A powered, steerable aerostat is called a dirigible. Sometimes this term is applied only to non-rigid balloons, and sometimes dirigible balloon is regarded as the definition of an airship.
Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back. These soon became known as blimps. During the Second World War, this shape was adopted for tethered balloons, and the nickname blimp was adopted along with the shape. In modern times, any small dirigible or airship is called a blimp, though a blimp may be unpowered as well as powered. Heavier-than-air aircraft, such as airplanes, must find some way to push air or gas downwards so that a reaction occurs to push the aircraft upwards; this dynamic movement through the air is the origin of the term aerodyne. There are two ways to produce dynamic upthrust: aerodynamic lift, and powered lift in the form of engine thrust. Aerodynamic lift involving wings is the most common, with fixed-wing aircraft being kept in the air by the forward movement of wings and rotorcraft by spinning wing-shaped rotors, sometimes called rotary wings. A wing is a flat, horizontal surface shaped in cross-section as an aerofoil. To fly, air must flow over the wing and generate lift.
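The magnitude of aerodynamic lift is usually summarized by the standard lift equation; the sketch below evaluates it for a made-up wing area, airspeed, and lift coefficient, purely as an illustration.

```python
# Standard lift equation L = 0.5 * rho * v^2 * S * C_L (illustrative values only).
def lift_newtons(air_density: float, airspeed: float, wing_area: float, lift_coefficient: float) -> float:
    """Aerodynamic lift in newtons from dynamic pressure, wing area and lift coefficient."""
    return 0.5 * air_density * airspeed ** 2 * wing_area * lift_coefficient

if __name__ == "__main__":
    # Hypothetical light aircraft: sea-level air (1.225 kg/m^3), 50 m/s airspeed, 16 m^2 wing, C_L = 0.6.
    print(f"lift: {lift_newtons(1.225, 50.0, 16.0, 0.6):.0f} N")
```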
A flexible wing is a wing made of fabric or thin sheet material stretched over a rigid frame. A kite is tethered to the ground and relies on the speed of the wind over its wings, which may be flexible or rigid, fixed, or rotary. With powered lift, the aircraft directs its engine thrust vertically downward. V/STOL aircraft, such as the Harrier Jump Jet and the F-35B, take off and land vertically using powered lift and transfer to aerodynamic lift in steady flight. A pure rocket is not regarded as an aerodyne because it does not depend on the air for its lift. Rocket-powered missiles that obtain aerodynamic lift at high speed due to airflow over their bodies are a marginal case. The forerunner of the fixed-wing aircraft is the kite. Whereas a fixed-wing aircraft relies on its forward speed to create airflow over the wings, a kite is tethered to the ground and relies on the wind blowing over its wings to provide lift. Kites were the first kind of aircraft to fly and were invented in China around 500 BC.
Much aerodynamic research was done with kites before test aircraft, wind tunnels and computer modelling programs became available. The first heavier-than-air craft capable of controlled free-flight were gliders. A glider designed by George Cayley carried out the first true manned, controllable flight in 1853.
Digital elevation model
A digital elevation model (DEM) is a 3D computer graphics representation of a terrain's surface – of a planet, moon, or asteroid – created from a terrain's elevation data. A "global DEM" refers to a Discrete Global Grid. DEMs are used in geographic information systems and are the most common basis for digitally produced relief maps. While a digital surface model (DSM) may be useful for landscape modeling, city modeling and visualization applications, a digital terrain model (DTM) is required for flood or drainage modeling, land-use studies, geological applications and other applications, and in planetary science. There is no universal usage of the terms digital elevation model, digital terrain model and digital surface model in scientific literature. In most cases the term digital surface model represents the earth's surface and includes all objects on it. In contrast to a DSM, the digital terrain model represents the bare ground surface without any objects like plants and buildings. DEM is often used as a generic term for DSMs and DTMs, representing only height information without any further definition about the surface.
Other definitions equalise the terms DEM and DTM, or equalise the terms DEM and DSM, or define the DEM as a subset of the DTM, which also represents other morphological elements, or define a DEM as a rectangular grid and a DTM as a three-dimensional model. Most data providers use the term DEM as a generic term for DSMs and DTMs. All datasets captured with satellites, airplanes or other flying platforms are DSMs; it is possible to compute a DTM from high-resolution DSM datasets with complex algorithms. In the following, the term DEM is used as a generic term for DSMs and DTMs. A DEM can be represented as a raster (a grid of squares) or as a vector-based triangular irregular network (TIN). The TIN DEM dataset is referred to as a primary DEM, whereas the raster DEM is referred to as a secondary DEM. A DEM can be acquired through techniques such as photogrammetry, lidar, IfSAR, land surveying, etc. DEMs are commonly built using data collected with remote sensing techniques, but they may also be built from land surveying. The digital elevation model itself consists of a matrix of numbers, but the data from a DEM is often rendered in visual form to make it understandable to humans.
This visualization may be in the form of a contoured topographic map, or could use shading and false color assignment to render elevations as colors. Visualizations are sometimes done as oblique views, reconstructing a synthetic visual image of the terrain as it would appear looking down at an angle. In these oblique visualizations, elevations are sometimes scaled using "vertical exaggeration" in order to make subtle elevation differences more noticeable. Some scientists, however, object to vertical exaggeration as misleading the viewer about the true landscape. Mappers may prepare digital elevation models in a number of ways, but they frequently use remote sensing rather than direct survey data. Older methods of generating DEMs involve interpolating digital contour maps that may have been produced by direct survey of the land surface; this method is still used in mountain areas. Note that contour line data or any other sampled elevation datasets are not DEMs, but may be considered digital terrain models. A DEM implies that elevation is available continuously at each location in the study area.
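The shading mentioned above is typically a hillshade computed from the local slope and aspect of the elevation grid. The sketch below is a minimal hillshading routine over a synthetic DEM, with a vertical-exaggeration factor applied before gradients are taken; the grid, sun angles, and exaggeration value are invented for illustration, and the aspect convention is a simplification.

```python
# Minimal hillshade of a DEM stored as a matrix of elevations (illustrative sketch).
import numpy as np

def hillshade(dem: np.ndarray, cell_size: float, azimuth_deg: float = 315.0,
              altitude_deg: float = 45.0, vertical_exaggeration: float = 1.0) -> np.ndarray:
    """Return illumination values in [0, 1] for a raster DEM lit from the given sun position."""
    z = dem * vertical_exaggeration
    dz_dy, dz_dx = np.gradient(z, cell_size)          # terrain slope components
    slope = np.arctan(np.hypot(dz_dx, dz_dy))         # slope angle
    aspect = np.arctan2(-dz_dx, dz_dy)                # downslope direction (simplified convention)
    az = np.radians(360.0 - azimuth_deg + 90.0)       # sun azimuth in math convention
    alt = np.radians(altitude_deg)                    # sun elevation above the horizon
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

if __name__ == "__main__":
    # Synthetic 200 x 200 terrain: a single smooth hill on a 30 m grid.
    y, x = np.mgrid[0:200, 0:200]
    dem = 500.0 * np.exp(-(((x - 100) ** 2 + (y - 100) ** 2) / 2500.0))
    shade = hillshade(dem, cell_size=30.0, vertical_exaggeration=2.0)
    print(shade.shape, float(shade.min()), float(shade.max()))
```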
One powerful technique for generating digital elevation models is interferometric synthetic aperture radar, where two passes of a radar satellite, or a single pass if the satellite is equipped with two antennas, collect sufficient data to generate a digital elevation map tens of kilometers on a side with a resolution of around ten meters. Other kinds of stereoscopic pairs can be employed using the digital image correlation method, where two optical images acquired with different angles are taken from the same pass of an airplane or an Earth observation satellite. The SPOT 1 satellite provided the first usable elevation data for a sizeable portion of the planet's landmass, using two-pass stereoscopic correlation. Further data were provided by the European Remote-Sensing Satellite (ERS) using the same method, the Shuttle Radar Topography Mission (SRTM) using single-pass SAR, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on the Terra satellite using double-pass stereo pairs.
The HRS instrument on SPOT 5 has acquired over 100 million square kilometers of stereo pairs. A tool of increasing value in planetary science has been the use of orbital altimetry to make digital elevation maps of planets. A primary tool for this is laser altimetry. Planetary digital elevation maps made using laser altimetry include the Mars Orbiter Laser Altimeter mapping of Mars, the Lunar Orbiter Laser Altimeter and Lunar Altimeter mapping of the Moon, and the Mercury Laser Altimeter mapping of Mercury. Elevation data used to create DEMs may be obtained by methods including:
Lidar
Radar
Stereo photogrammetry from aerial surveys
Structure from motion / multi-view stereo applied to aerial photography
Block adjustment from optical satellite imagery
Interferometry from radar data
Real-time kinematic GPS
Topographic maps
Theodolite or total station
Doppler radar
Focus variation
Inertial surveys
Surveying and
A simulation is an approximate imitation of the operation of a process or system. Simulating something first requires that a model be developed; this model is a well-defined description of the simulated subject and represents its key characteristics, such as its behaviour and abstract or physical properties. The model represents the system itself, whereas the simulation represents its operation over time. Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, training and video games. Computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist. Key issues in simulation include the acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and the fidelity and validity of the simulation outcomes.
Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement and development in simulation technology and practice, particularly in the field of computer simulation. Historically, simulations used in different fields developed independently, but 20th-century studies of systems theory and cybernetics, combined with the spreading use of computers across all those fields, have led to some unification and a more systematic view of the concept. Physical simulation refers to simulation in which physical objects are substituted for the real thing; these physical objects are often chosen because they are smaller or cheaper than the actual object or system. Interactive simulation is a special kind of physical simulation, often referred to as a human-in-the-loop simulation, in which physical simulations include human operators, such as in a flight simulator, sailing simulator, or driving simulator. Continuous simulation is a simulation where time evolves continuously based on numerical integration of differential equations.
Discrete Event Simulation is a simulation where time evolves along events that represent critical moments, while the values of the variables are not relevant between two of them, or are trivial to compute if needed. Stochastic Simulation is a simulation where some variable or process is regulated by stochastic factors and estimated based on Monte Carlo techniques using pseudo-random numbers, so replicated runs from the same boundary conditions are expected to produce different results within a specific confidence band. Deterministic Simulation is a simulation where the variables are regulated by deterministic algorithms, so replicated runs from the same boundary conditions always produce identical results. Hybrid Simulation corresponds to a mix between continuous and discrete event simulation and results in integrating numerically the differential equations between two sequential events to reduce the number of discontinuities. Stand Alone Simulation is a simulation running on a single workstation by itself.
Distributed Simulation is a simulation operating over distributed computers in order to guarantee access from/to different resources. Modeling & Simulation as a Service is an approach where simulation is accessed as a service over the web. Modeling, interoperable Simulation and Serious Games is an approach where serious-games approaches are integrated with interoperable simulation. Simulation fidelity is used to describe the accuracy of a simulation and how well it imitates the real-life counterpart. Fidelity is broadly classified into one of three categories: low, medium, and high. Specific descriptions of fidelity levels are subject to interpretation, but the following generalizations can be made: low – the minimum simulation required for a system to respond to inputs and provide outputs; medium – responds automatically to stimuli, with limited accuracy; high – nearly indistinguishable from, or as close as possible to, the real system. Human-in-the-loop simulations can include a computer simulation as a so-called synthetic environment. Simulation in failure analysis refers to simulation in which we create the environment or conditions needed to identify the cause of equipment failure.
This is often the fastest method of identifying the failure cause. A computer simulation is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system; it is a tool to investigate the behaviour of the system under study. Computer simulation has become a useful part of modeling many natural systems in physics and biology, and human systems in economics and social science, as well as in engineering, to gain insight into the operation of those systems.
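To make the stochastic-versus-deterministic distinction described above concrete, the sketch below runs a small Monte Carlo simulation several times: different seeds give replicated runs whose results scatter within a confidence band, while a fixed seed makes a run exactly repeatable. All numbers are invented for illustration.

```python
# Monte Carlo sketch: replicated stochastic runs scatter around the true value,
# while fixing the seed makes a run deterministic and exactly repeatable.
import random
import statistics

def estimate_pi(samples: int, seed: int) -> float:
    """Estimate pi by sampling random points in the unit square (stochastic simulation)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

if __name__ == "__main__":
    # Different seeds: replicated runs from the same boundary conditions differ slightly.
    estimates = [estimate_pi(100_000, seed) for seed in range(10)]
    print(f"mean estimate: {statistics.mean(estimates):.4f} +/- {statistics.stdev(estimates):.4f}")
    # Same seed: the run is exactly repeatable, as in a deterministic simulation.
    assert estimate_pi(100_000, seed=0) == estimate_pi(100_000, seed=0)
```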
Scientific visualization is an interdisciplinary branch of science concerned with the visualization of scientific phenomena. It is also considered a subset of computer graphics, a branch of computer science. The purpose of scientific visualization is to graphically illustrate scientific data to enable scientists to understand and glean insight from their data. One of the earliest examples of three-dimensional scientific visualisation was Maxwell's thermodynamic surface, sculpted in clay in 1874 by James Clerk Maxwell; this prefigured modern scientific visualization techniques. Notable early two-dimensional examples include the flow map of Napoleon's March on Moscow produced by Charles Joseph Minard in 1869. Scientific visualization using computer graphics gained in popularity as graphics matured. Primary applications were scalar fields and vector fields from computer simulations and from measured data. The primary methods for visualizing two-dimensional scalar fields are color mapping and drawing contour lines.
2D vector fields are visualized using glyphs, streamlines or line integral convolution methods. 2D tensor fields are often resolved to a vector field by using one of the two eigenvectors to represent the tensor at each point in the field and then visualized using vector field visualization methods. For 3D scalar fields the primary methods are volume rendering and isosurfaces. Methods for visualizing vector fields include glyphs such as arrows, streaklines, particle tracing, line integral convolution and topological methods. Visualization techniques such as hyperstreamlines were developed to visualize 2D and 3D tensor fields. Computer animation is the art and science of creating moving images via the use of computers. It is becoming more common for such images to be created by means of 3D computer graphics, though 2D computer graphics are still used for stylistic, low-bandwidth and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film; it is referred to as CGI (computer-generated imagery) when used in films. Applications include medical animation, most commonly utilized as an instructional tool for medical professionals or their patients.
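As a small example of the scalar-field methods named above (color mapping and contour lines), the following sketch plots a synthetic 2D scalar field with a filled color map and overlaid contour lines using matplotlib; the field itself is an arbitrary test function chosen only for illustration.

```python
# Color mapping and contour lines for a synthetic 2D scalar field (illustrative sketch).
import numpy as np
import matplotlib.pyplot as plt

# Arbitrary test scalar field f(x, y) on a regular grid.
x = np.linspace(-3.0, 3.0, 300)
y = np.linspace(-3.0, 3.0, 300)
X, Y = np.meshgrid(x, y)
F = np.exp(-(X ** 2 + Y ** 2)) * np.cos(2.0 * X) + 0.3 * Y

fig, ax = plt.subplots()
color_map = ax.pcolormesh(X, Y, F, cmap="viridis", shading="auto")          # color mapping
contours = ax.contour(X, Y, F, levels=10, colors="black", linewidths=0.5)   # contour lines
ax.clabel(contours, inline=True, fontsize=7)
fig.colorbar(color_map, ax=ax, label="scalar value")
ax.set_title("Color-mapped scalar field with contour lines")
plt.show()
```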
Computer simulation is a computer program, or a network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of mathematical modelling of many natural systems in physics, computational physics and biology. The simultaneous visualization and simulation of a system is called visulation. Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for months. The scale of events being simulated by computer simulations has far exceeded anything possible using traditional paper-and-pencil mathematical modeling: over 10 years ago, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computing Modernization Program. Information visualization is the study of "the visual representation of large-scale collections of non-numerical information, such as files and lines of code in software systems and bibliographic databases, networks of relations on the internet, and so forth".
Information visualization has focused on the creation of approaches for conveying abstract information in intuitive ways. Visual representations and interaction techniques take advantage of the human eye's broad-bandwidth pathway into the mind to allow users to see and understand large amounts of information at once. The key difference between scientific visualization and information visualization is that information visualization is often applied to data that is not generated by scientific inquiry. Some examples are graphical representations of data for business, government and social media. Interface technology and perception research shows how new interfaces and a better understanding of underlying perceptual issues create new opportunities for the scientific visualization community. Rendering is the process of generating an image from a model by means of computer programs. The model is a description of three-dimensional objects in a defined language or data structure. It would contain geometry, viewpoint, texture and shading information.
The image is a digital image or raster graphics image. The term may be by analogy with an "artist's rendering" of a scene. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce the final video output. Important rendering techniques are:

Scanline rendering and rasterisation. A high-level representation of an image necessarily contains elements in a different domain from pixels; these elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In 3D rendering, triangles and polygons in space might be primitives.

Ray casting. Ray casting is used for realtime