A transmission medium is a material substance that can propagate energy waves. For example, the transmission medium for sound is a gas, but solids and liquids may also act as transmission media for sound. While a material substance is not required for electromagnetic waves such as light and radio waves to propagate, such waves are affected by the transmission media they pass through, for instance by absorption or by reflection or refraction at the interfaces between media. The term transmission medium also refers to a technical device that employs a material substance to transmit or guide waves; thus, an optical fiber or a copper cable is a transmission medium, and such media can also guide the transmission of signals in networks. A transmission medium is classified as a linear medium if different waves at any particular point in the medium can be superposed. Electromagnetic radiation can be transmitted through an optical medium, such as optical fiber, or through twisted pair wires, coaxial cable, or dielectric-slab waveguides.
It may pass through any physical material that is transparent to the specific wavelength, such as water, glass, or concrete. Sound is, by definition, the vibration of matter, so it requires a physical medium for transmission, as do other kinds of mechanical waves and heat energy. Historically, science incorporated various aether theories to explain the transmission medium; however, it is now known that electromagnetic waves do not require a physical transmission medium and can travel through the vacuum of free space. Regions of the insulative vacuum can become conductive for electrical conduction through the presence of free electrons, holes, or ions. A physical medium in data communications is the transmission path over which a signal propagates, and many transmission media are used as communications channels. For telecommunications purposes in the United States, Federal Standard 1037C classifies transmission media as one of the following: guided, in which waves are guided along a solid medium such as a transmission line, and wireless, in which transmission and reception are achieved by means of an antenna.
One of the most common physical media used in networking is copper wire. Copper wire can carry signals over long distances using relatively low amounts of power; the unshielded twisted pair consists of eight strands of copper wire, organized into four pairs. Another example of a physical medium is optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications. Optical fiber is a thin strand of glass. Several major factors favor optical fiber over copper, including data rates, distance, and cost. Optical fiber can carry far more data than copper, and it can be run for hundreds of miles without signal repeaters, in turn reducing maintenance costs and improving the reliability of the communication system, because repeaters are a common source of network failures. Glass is also lighter than copper, reducing the need for specialized heavy-lifting equipment when installing long-distance optical fiber. Optical fiber for indoor applications costs about a dollar a foot, the same as copper.
Multimode and single mode are the two types of optical fiber in use. Multimode fiber uses LEDs as the light source and can carry signals over shorter distances, about 2 kilometers; single mode fiber can carry signals over distances of tens of miles. Wireless media may carry surface waves or skywaves, either longitudinally or transversely, and are classified accordingly. In both cases, communication is in the form of electromagnetic waves. With guided transmission media, the waves are guided along a physical path. Unguided transmission media are methods that allow the transmission of data without the use of a physical conductor to define the path the signal takes; examples include radio and infrared. The term direct link refers to the transmission path between two devices in which signals propagate directly from transmitter to receiver with no intermediate devices other than amplifiers or repeaters used to increase signal strength; this term can apply to unguided media as well. A transmission may be simplex, half-duplex, or full-duplex.
In simplex transmission, signals are transmitted in only one direction. In half-duplex operation, both stations may transmit, but only one at a time. In full-duplex operation, both stations may transmit simultaneously; in this case, the medium carries signals in both directions at the same time. There are two types of transmission media: guided and unguided. Guided media include unshielded twisted pair, shielded twisted pair, coaxial cable, and optical fiber. Unguided transmission media carry data signals that flow through the air; they are not bound to a channel to follow. Unguided media used for data communication include radio transmission and microwave transmission. Transmission and reception of data are performed in four steps: the data is coded as binary numbers at the sender end; a carrie
In everyday use and in kinematics, the speed of an object is the magnitude of its velocity. The average speed of an object in an interval of time is the distance travelled by the object divided by the duration of the interval. Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second, but the most common unit of speed in everyday usage is the kilometre per hour or, in the US and the UK, miles per hour. For air and marine travel the knot is commonly used. The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in a vacuum, c = 299792458 metres per second. Matter cannot quite reach the speed of light. In relativity physics, the concept of rapidity replaces the classical idea of speed. Italian physicist Galileo Galilei is credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time. In equation form, v = d/t, where v is speed, d is distance, and t is time.
A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed. Speed at some instant, or assumed constant during a short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at a constant speed, but if it did go at that speed for a full hour, it would travel 50 km. If the vehicle continued at that speed for half an hour, it would cover half that distance. If it continued for only one minute, it would cover about 833 m. In mathematical terms, the instantaneous speed v is defined as the magnitude of the instantaneous velocity v, that is, the derivative of the position r with respect to time: v = |v| = |ṙ| = |dr/dt|. If s is the length of the path travelled until time t, the speed equals the time derivative of s: v = ds/dt. In the special case where the velocity is constant, this can be simplified to v = s/t.
The average speed over a finite time interval is the total distance travelled divided by the time duration. Unlike instantaneous speed, average speed describes the motion over the whole interval. For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour. Likewise, if 320 kilometres are travelled in 4 hours, the average speed is 80 kilometres per hour: when a distance in kilometres is divided by a time in hours, the result is in kilometres per hour. Average speed does not describe the speed variations that may have taken place during shorter time intervals, so average speed can be quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated by rearranging the definition to d = v̄t. Using this equation for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres. Expressed in graphical language, the slope of a tangent line at any point of a distance-time graph is the instantaneous speed at this point, while the slope of a chord line of the same graph is the average speed during the time interval covered by the chord.
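The definitions above can be sketched in a short program; the function names are illustrative, not from any standard library.

```python
def average_speed(distance_km, time_h):
    # Average speed: total distance divided by total time.
    return distance_km / time_h

def distance_travelled(avg_speed_kmh, time_h):
    # Rearranged definition d = v * t, with v the average speed.
    return avg_speed_kmh * time_h

print(average_speed(320, 4))      # 80.0 km/h, as in the example above
print(distance_travelled(80, 4))  # 320 km for a 4-hour trip at 80 km/h
```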
The average speed of an object is v̄ = s/t. Linear speed is the distance travelled per unit of time, while tangential speed is the linear speed of something moving along a circular path. A point on the outside edge of a merry-go-round or turntable travels a greater distance in one complete rotation than a point nearer the center. Travelling a greater distance in the same time means a greater speed, so linear speed is greater on the outer edge of a rotating object than it is closer to the axis. This speed along a circular path is known as tangential speed because the direction of motion is tangent to the circumference of the circle. For circular motion, the terms linear speed and tangential speed are used interchangeably; both use units such as m/s and km/h. Rotational speed involves the number of revolutions per unit of time. All parts of a rigid merry-
Oscillation is the repetitive variation, typically in time, of some measure about a central value or between two or more different states. The term vibration is used to describe mechanical oscillation. Familiar examples of oscillation include a swinging pendulum and alternating current. Oscillations occur not only in mechanical systems but in dynamic systems in every area of science: for example the beating of the human heart, business cycles in economics, predator–prey population cycles in ecology, geothermal geysers in geology, vibration of strings in guitars and other string instruments, periodic firing of nerve cells in the brain, and the periodic swelling of Cepheid variable stars in astronomy. The simplest mechanical oscillating system is a weight attached to a linear spring, subject only to weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium.
However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position, establishing a new restoring force in the opposite sense. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for one oscillation to occur is referred to as the oscillatory period. Systems where the restoring force on a body is directly proportional to its displacement, such as the dynamics of the spring-mass system, are described mathematically by the simple harmonic oscillator, and the resulting regular periodic motion is known as simple harmonic motion. In the spring-mass system, oscillations occur because, at the static equilibrium displacement, the mass has kinetic energy, which is converted into potential energy stored in the spring at the extremes of its path. The spring-mass system illustrates some common features of oscillation, namely the existence of an equilibrium and the presence of a restoring force which grows stronger the further the system deviates from equilibrium.
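The spring-mass oscillation described above can be sketched numerically; the parameter values below are illustrative, and the semi-implicit Euler scheme is just one of several ways to integrate the equation of motion.

```python
import math

# Undamped spring-mass system: x'' = -(k/m) x (simple harmonic motion).
m, k = 1.0, 4.0                 # illustrative mass and spring constant
x, v = 1.0, 0.0                 # released from rest at x = 1
dt = 1e-4
period = 2 * math.pi * math.sqrt(m / k)   # analytic period of SHM

for _ in range(int(round(period / dt))):
    v += -(k / m) * x * dt      # restoring force grows with displacement
    x += v * dt

# After one full period the mass returns to (nearly) its starting state.
assert abs(x - 1.0) < 1e-2 and abs(v) < 1e-2
```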
All real-world oscillator systems are thermodynamically irreversible. This means there are dissipative processes, such as friction or electrical resistance, which continually convert some of the energy stored in the oscillator into heat in the environment; this is called damping. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process can be illustrated by the decay of the damped harmonic oscillator. In addition, an oscillating system may be subject to some external force, as when an AC circuit is connected to an outside power source; in this case the oscillation is said to be driven. Some systems can be excited by energy transfer from the environment. This transfer typically occurs where systems are embedded in some fluid flow. For example, the phenomenon of flutter in aerodynamics occurs when an arbitrarily small displacement of an aircraft wing results in an increase in the angle of attack of the wing on the air flow and a consequential increase in lift coefficient, leading to a still greater displacement.
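Damping can be added to the same kind of numerical sketch; with the illustrative damping coefficient below, the mechanical energy continually decreases, as described above.

```python
# Damped harmonic oscillator: x'' = -(k/m) x - (c/m) x',
# integrated with a semi-implicit Euler scheme (illustrative values).
m, k, c = 1.0, 4.0, 0.2        # mass, spring constant, damping coefficient
x, v = 1.0, 0.0
dt = 0.001

def energy(x, v):
    # Kinetic plus potential energy stored in the oscillator.
    return 0.5 * m * v * v + 0.5 * k * x * x

e0 = energy(x, v)
for _ in range(10000):          # simulate 10 seconds
    v += (-(k / m) * x - (c / m) * v) * dt
    x += v * dt

# Dissipation converts stored energy into heat: the total decays with time.
assert energy(x, v) < e0
```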
At sufficiently large displacements, the stiffness of the wing dominates to provide the restoring force that enables an oscillation. The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example two masses connected by three springs. In such cases, the behavior of each variable influences that of the others; this leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks mounted on a common wall will tend to synchronise; this phenomenon was first observed by Christiaan Huygens in 1665. The apparent motion of the compound oscillations typically appears complicated, but a more economic, computationally simpler and conceptually deeper description is given by resolving the motion into normal modes. A further special case is the coupled oscillator, in which energy alternates between two forms of oscillation. Well known is the Wilberforce pendulum, where the oscillation alternates between the elongation of a vertical spring and the rotation of an object at the end of that spring.
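For a minimal coupled system, take two equal masses joined to two walls and to each other by three equal springs (a hypothetical textbook configuration; the values below are illustrative). The normal-mode frequencies follow from the eigenvalues of the 2×2 dynamical matrix:

```python
import math

# Two equal masses, three equal springs (wall-spring-mass-spring-mass-spring-wall):
#   x1'' = -(k/m)(2*x1 - x2),  x2'' = -(k/m)(2*x2 - x1)
# The dynamical matrix (k/m) * [[2, -1], [-1, 2]] has eigenvalues
# (k/m)*1 (in-phase mode) and (k/m)*3 (anti-phase mode).
k, m = 4.0, 1.0                      # illustrative values
a, b = 2 * k / m, -k / m             # matrix entries [[a, b], [b, a]]
lam_in, lam_anti = a + b, a - b      # eigenvalues of this symmetric matrix
w_in, w_anti = math.sqrt(lam_in), math.sqrt(lam_anti)
print(w_in, w_anti)                  # 2.0 and about 3.464 rad/s
```

In the in-phase mode the coupling spring is never stretched, so the frequency matches a single spring-mass system; the anti-phase mode stretches it twice as much and oscillates faster.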
As the number of degrees of freedom becomes arbitrarily large, a system approaches continuity; such systems have an infinite number of normal modes, and their oscillations occur in the form of waves that can characteristically propagate. The mathematics of oscillation deals with the quantification of the amount that a sequence or function tends to move between extremes. There are several related notions: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval. See also: crystal oscillator, neutron stars, the cyclic model, neutral particle oscillation (e.g. neutrino oscillations), the quantum harmonic oscillator, and cellular automata oscillators.
In fluid dynamics, wind waves, or wind-generated waves, are surface waves that occur on the free surface of bodies of water. They result from the wind blowing over an area of fluid surface. Waves in the oceans can travel thousands of miles before reaching land. Wind waves on Earth range in size from small ripples to waves over 100 ft high. When directly generated and affected by local winds, a wind wave system is called a wind sea. After the wind ceases to blow, wind waves are called swells. More generally, a swell consists of wind-generated waves that are not affected by the local wind at that time; they have been generated some time ago. Wind waves in the ocean are called ocean surface waves. Wind waves have a certain amount of randomness: subsequent waves differ in height and shape with limited predictability. They can be described as a stochastic process, in combination with the physics governing their generation, growth and decay, as well as the interdependence between flow quantities such as the water surface movements, flow velocities and water pressure.
The key statistics of wind waves in evolving sea states can be predicted with wind wave models. Although waves are usually considered in the water seas of Earth, the hydrocarbon seas of Titan may also have wind-driven waves. The great majority of large breakers seen at a beach result from distant winds. Five factors influence the formation of the flow structures in wind waves: wind speed or strength relative to wave speed (the wind must be moving faster than the wave crest for energy transfer); the uninterrupted distance of open water over which the wind blows without significant change in direction (the fetch); the width of the area affected by the fetch; the wind duration, i.e. the time for which the wind has blown over the water; and the water depth. All of these factors work together to determine the size of the water waves and the structure of the flow within them. The main dimensions associated with waves are wave height, wavelength, wave period and wave propagation direction. A developed sea has the maximum wave size theoretically possible for a wind of a specific strength and fetch.
Further exposure to that specific wind could only cause a dissipation of energy due to the breaking of wave tops and formation of "whitecaps". Waves in a given area have a range of heights. For weather reporting and for scientific analysis of wind wave statistics, their characteristic height over a period of time is expressed as the significant wave height. This figure represents an average height of the highest one-third of the waves in a given time period, or in a specific wave or storm system. The significant wave height is also the value a "trained observer" would estimate from visual observation of a sea state. Given the variability of wave height, the largest individual waves are likely to be somewhat less than twice the reported significant wave height for a particular day or storm. Wave formation on a flat water surface by wind is started by a random distribution of normal pressure of turbulent wind flow over the water. This pressure fluctuation produces normal and tangential stresses in the surface water, which generate waves.
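The significant wave height defined above, the average of the highest one-third of observed waves, can be sketched directly; the sample heights below are hypothetical.

```python
def significant_wave_height(heights):
    # Mean of the highest one-third of the observed wave heights.
    top_third = sorted(heights, reverse=True)[: max(1, len(heights) // 3)]
    return sum(top_third) / len(top_third)

heights = [0.5, 0.8, 1.0, 1.2, 1.5, 1.7, 2.0, 2.4, 3.0]   # metres
print(significant_wave_height(heights))   # mean of [3.0, 2.4, 2.0]
```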
It is assumed that: the water is originally at rest; the water is not viscous; the water is irrotational; there is a random distribution of normal pressure to the water surface from the turbulent wind; and correlations between air and water motions are neglected. The second mechanism involves wind shear forces on the water surface. In 1957, John W. Miles suggested a surface wave generation mechanism initiated by turbulent wind shear flows, based on the inviscid Orr–Sommerfeld equation. He found that the energy transfer from wind to water surface is proportional to the curvature of the velocity profile of the wind at the point where the mean wind speed is equal to the wave speed. Since the wind speed profile is logarithmic to the water surface, the curvature has a negative sign at this point. This relation shows the wind flow transferring its kinetic energy to the water surface at their interface. The assumptions are: two-dimensional parallel shear flow; incompressible, inviscid water and wind; irrotational water; and a small slope of the displacement of the water surface. Generally these wave formation mechanisms occur together on the water surface and produce developed waves.
For example, if we assume a flat sea surface and a sudden wind flow blows across it, the physical wave generation process follows this sequence: Turbulent wind forms random pressure fluctuations at the sea surface. Ripples with wavelengths on the order of a few centimetres are generated by the pressure fluctuations. The winds keep acting on the rippled sea surface, causing the waves to become larger. As the waves grow, the pressure differences get larger, causing the growth rate to increase. The shear instability then expedites the wave growth exponentially. The interactions between the waves on the surface generate longer waves, transferring wave energy from the shorter waves generated by the Miles mechanism to waves which have frequencies lower than the frequency at the peak wave magnitude; finally, the waves will be faster than the cross wind speed. Three different types of wind waves develop over time: Capillary waves
Dispersion (water waves)
In fluid dynamics, dispersion of water waves refers to frequency dispersion, which means that waves of different wavelengths travel at different phase speeds. Water waves, in this context, are waves propagating on the water surface, with gravity and surface tension as the restoring forces; as a result, water with a free surface is considered to be a dispersive medium. For a certain water depth, surface gravity waves – i.e. waves occurring at the air–water interface and gravity as the only force restoring it to flatness – propagate faster with increasing wavelength. On the other hand, for a given wavelength, gravity waves in deeper water have a larger phase speed than in shallower water. In contrast with the behavior of gravity waves, capillary waves propagate faster for shorter wavelengths. Besides frequency dispersion, water waves exhibit amplitude dispersion; this is a nonlinear effect, by which waves of larger amplitude have a different phase speed from small-amplitude waves. This section is about frequency dispersion for waves on a fluid layer forced by gravity, according to linear theory.
For surface tension effects on frequency dispersion, see surface tension effects in Airy wave theory and capillary wave. The simplest propagating wave of unchanging form is a sine wave. A sine wave with water surface elevation η is given by: η = a sin θ, where a is the amplitude and θ is the phase function, depending on the horizontal position x and time t: θ = 2π(x/λ − t/T) = kx − ωt, with k = 2π/λ and ω = 2π/T, where λ is the wavelength, T is the period, k is the wavenumber and ω is the angular frequency. Characteristic phases of a water wave are: the upward zero-crossing at θ = 0, the wave crest at θ = ½π, the downward zero-crossing at θ = π and the wave trough at θ = 1½π. A certain phase repeats itself after an integer m multiple of 2π: sin(θ + m·2π) = sin θ. Essential for water waves, and other wave phenomena in physics, is that free propagating waves of non-zero amplitude only exist when the angular frequency ω and wavenumber k satisfy a functional relationship: the frequency dispersion relation ω² = Ω²(k).
The dispersion relation has two solutions: ω = +Ω(k) and ω = −Ω(k), corresponding to waves travelling in the positive or negative x-direction. The dispersion relation will in general depend on several other parameters in addition to the wavenumber k. For gravity waves, according to linear theory, these are the acceleration by gravity g and the water depth h; the dispersion relation for these waves is ω² = gk tanh(kh), an implicit equation, with tanh denoting the hyperbolic tangent function. An initial wave phase θ = θ₀ propagates as a function of time; its subsequent position is given by: x = (λ/T)·t + (λ/2π)·θ₀ = (ω/k)·t + θ₀/k. This shows that the phase moves with the velocity c_p = λ/T = ω/k = Ω(k)/k, called the phase velocity. A sinusoidal wave, of small surface-elevation amplitude and with a constant wavelength, propagates with the phase velocity, also called celerity or phase speed. While the phase velocity is a vector and has an associated direction, celerity or phase speed refer only to the magnitude of the phase velocity.
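The phase velocity c_p = ω/k, together with the linear dispersion relation ω² = gk tanh(kh) for surface gravity waves, can be sketched as follows; the wavelengths and depths are illustrative.

```python
import math

def phase_speed(wavelength, depth, g=9.81):
    # c_p = omega / k, with omega from the linear dispersion relation
    # omega^2 = g * k * tanh(k * h).
    k = 2 * math.pi / wavelength
    omega = math.sqrt(g * k * math.tanh(k * depth))
    return omega / k

# Shallow-water limit (wavelength >> depth): c_p approaches sqrt(g*h).
assert abs(phase_speed(1000.0, 5.0) - math.sqrt(9.81 * 5.0)) < 0.01
# For a fixed depth, longer gravity waves travel faster.
assert phase_speed(200.0, 50.0) > phase_speed(50.0, 50.0)
```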
According to linear theory for waves forced by gravity, the phase speed depends on the wavelength and the water depth. For a fixed water depth, long waves propagate faster than shorter waves. In the left figure, it can be seen that shallow water waves, with wavelengths λ much larger than the water depth h, travel with the phase velocity c_p = √(gh), with g the acceleration by gravity and c_p the phase speed. Since this shallow-water phase speed is independent of the wavelength, shallow water waves do not have frequency dispersion. Using another normalization for the same frequency dispersion relation, the figure on the right shows that for a fixed wavelength λ the phase speed c_p increases with increasing water depth. Until, in deep water with water depth h larger than half t
In physics, electromagnetic radiation refers to the waves of the electromagnetic field, propagating through space, carrying electromagnetic radiant energy. It includes radio waves, infrared, ultraviolet, X-rays, gamma rays. Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields that propagate at the speed of light, which, in a vacuum, is denoted c. In homogeneous, isotropic media, the oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave; the wavefront of electromagnetic waves emitted from a point source is a sphere. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength these are: radio waves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.
Electromagnetic waves are emitted by electrically charged particles undergoing acceleration, and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, as in electromagnetic induction and electrostatic induction phenomena. In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic force, responsible for all electromagnetic interactions.
Quantum electrodynamics is the theory of such interactions between electromagnetic radiation and matter. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is greater for photons of higher frequency. This relationship is given by Planck's equation E = hν, where E is the energy per photon, ν is the frequency of the photon, and h is Planck's constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light. The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of visible or lower frequencies is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules or break chemical bonds. The effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high-frequency ultraviolet, X-rays and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds.
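Planck's equation E = hν can be checked numerically; the wavelengths below are illustrative (green visible light at 550 nm and a gamma ray with a wavelength 100,000 times shorter).

```python
h = 6.62607015e-34     # Planck's constant, J*s
c = 2.99792458e8       # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    # E = h * nu, with frequency nu = c / wavelength.
    return h * c / wavelength_m

visible = photon_energy(550e-9)   # a green visible-light photon
gamma = photon_energy(5.5e-12)    # a gamma ray, 100,000x shorter wavelength

# The energy ratio equals the frequency ratio, ~100,000 here.
assert abs(gamma / visible - 1e5) < 1e-3
```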
These radiations have the ability to cause chemical reactions and damage living cells beyond the damage resulting from simple heating, and can be a health hazard. James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves. According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time. Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field. In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa; this relationship between the two occurs without either type of field causing the other.
In fact, magnetic fields can be viewed as electric fields in another frame of reference, and electric fields can be viewed as magnetic fields in another frame of reference, but they have equal significance as physics is the same in all frames of reference, so the close relationship between space and time here is more than an analogy. Together, these fields form a propagating electromagnetic wave, which moves out into space and need never again interact with the source. The distant EM field formed in this way by the acceleration of a charge carries energy with it that "radiates" away through space, hence the term. Maxwell's equations established that some charges and currents produce a local type of electromagnetic field near them that does not have the behaviour of EMR. Currents directly produce a magnetic field, but it is of a magnetic dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential produce an electric dipole type electric
Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-waves, S-waves, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and seismograph array coverage. The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies, which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core and plate tectonic processes. Tomography is solved as an inverse problem: seismic travel time data are compared to an initial Earth model, and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth were of uniform composition, but compositional layering, tectonic structure, and thermal variations reflect and refract seismic waves.
The location and magnitude of these variations can be calculated by the inversion process, although solutions to tomographic inversions are non-unique. Seismic tomography is similar to medical x-ray computed tomography in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of traveltime difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the earth and potential uncertainty in the location of the earthquake hypocenter. CT scans use a known source. Seismic tomography requires large datasets of seismograms and well-located earthquake or explosion sources; these became more available in the 1960s with the expansion of global seismic networks and in the 1970s when digital seismograph data archives were established. These developments occurred concurrently with advancements in computing power that were required to solve inverse problems and generate theoretical seismograms for model testing.
In 1977, P-wave delay times were used to create the first seismic array-scale 2D map of seismic velocity. In the same year, P-wave data were used to determine 150 spherical harmonic coefficients for velocity anomalies in the mantle. The first model using iterative techniques, required when there are a large number of unknowns, was produced in 1984. This built upon the first radially anisotropic model of the Earth, which provided the required initial reference frame against which to compare tomographic models for iteration. Initial models had resolution of ~3000 to 5000 km, as compared to the few-hundred-kilometre resolution of current models. Seismic tomographic models improve with the expansion of seismic networks. Recent models of global body waves have used over 10^7 traveltimes to model 10^5 to 10^6 unknowns. Seismic tomography uses seismic records to create 2D and 3D images of subsurface anomalies by solving large inverse problems that generate models consistent with observed data. Various methods are used to resolve anomalies in the crust and lithosphere, shallow mantle, whole mantle, and core, based on the availability of data and the types of seismic waves that penetrate the region at a suitable wavelength for feature resolution.
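A toy version of the traveltime inverse problem can be sketched with two constant-slowness cells crossed by two rays; the path lengths and travel times below are made up for illustration, and real tomography solves far larger, regularized systems rather than an exact 2×2 solve.

```python
# Travel time of a ray = sum over cells of (path length in cell) * (slowness).
# Ray 1 spends 10 km in cell A only; ray 2 spends 6 km in A and 8 km in B.
G = [[10.0, 0.0], [6.0, 8.0]]   # path-length matrix, km
t = [2.0, 3.2]                  # observed travel times, s

# Solve the 2x2 linear system G @ s = t for the slowness vector s
# (Cramer's rule here; real problems use damped least squares).
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
sA = (t[0] * G[1][1] - G[0][1] * t[1]) / det
sB = (G[0][0] * t[1] - t[0] * G[1][0]) / det
print(1 / sA, 1 / sB)           # recovered cell velocities: 5.0 and 4.0 km/s
```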
The accuracy of the model is limited by the availability and accuracy of seismic data, the wave type utilized, and the assumptions made in the model. P-wave data are used in most local models and in global models in areas with sufficient earthquake and seismograph density. S-wave and surface wave data are used in global models when this coverage is not sufficient, such as in ocean basins and away from subduction zones. First-arrival times are the most widely used, but models utilizing reflected and refracted phases are used in more complex models, such as those imaging the core. Differential traveltimes between wave phases or types are also used. Local tomographic models are often based on a temporary seismic array targeting specific areas, unless the region is seismically active with extensive permanent network coverage; these allow for the imaging of the crust and upper mantle. Diffraction and wave equation tomography use the full waveform, rather than just the first arrival times; the inversion of the amplitudes and phases of all arrivals provides more detailed density information than transmission traveltime alone.
Despite their theoretical appeal, these methods are not widely employed because of the computing expense and difficult inversions. Reflection tomography originated with exploration geophysics; it uses an artificial source to resolve small-scale features at crustal depths. Wide-angle tomography is similar, but with a wide source-to-receiver offset. This allows for the detection of seismic waves refracted from sub-crustal depths and can determine continental architecture and details of plate margins. These two methods are often used together. Local earthquake tomography is used in seismically active regions with sufficient seismometer coverage. Given the proximity between sources and receivers, a precise earthquake focus location must be known; this requires the simultaneous iteration of both structure and focus locations in model calculations. Teleseismic tomography uses waves from distant earthquakes that deflect upwards to a local seismic array; the models can reach depths comparable to the array aperture, allowing imaging of the crust and lithosphere.
The waves travel near 30° from vertical. Regional to global scale tomographic models are based on long wavelengths. These models have better agreement with each other than local models do, due to the large feature size they image, such as subducted slabs and superplumes; the trade off from w