In electronics, an analog-to-digital converter (ADC) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement, such as an electronic device that converts an input analog voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number proportional to the input, but there are other possibilities. There are several ADC architectures. Due to the complexity and the need for precisely matched components, all but the most specialized ADCs are implemented as integrated circuits. A digital-to-analog converter performs the reverse function. An ADC converts a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal; the conversion involves quantization of the input, so it introduces a small amount of error or noise. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input and limiting the allowable bandwidth of the input signal.
The performance of an ADC is characterized by its bandwidth and signal-to-noise ratio. The bandwidth of an ADC is characterized by its sampling rate; the SNR of an ADC is influenced by many factors, including the resolution and accuracy, aliasing and jitter. The SNR of an ADC is often summarized in terms of its effective number of bits (ENOB), the number of bits of each measure it returns that are on average not noise. An ideal ADC has an ENOB equal to its resolution. ADCs are chosen to match the bandwidth and required SNR of the signal to be digitized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then per the Nyquist–Shannon sampling theorem, perfect reconstruction is possible; the presence of quantization error limits the SNR of even an ideal ADC. However, if the SNR of the ADC exceeds that of the input signal, its effects may be neglected, resulting in an essentially perfect digital representation of the analog input signal. The resolution of the converter indicates the number of discrete values it can produce over the range of analog values.
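As an informal illustration of the Nyquist criterion just described, the short Python sketch below shows how a tone sampled below twice its frequency folds (aliases) to a lower apparent frequency; the function name and the 3 kHz / 8 kHz / 4 kHz figures are arbitrary choices for the example, not values from the text.

```python
def apparent_frequency(f_signal, f_sample):
    """Frequency at which a sampled sine tone appears after sampling,
    folded into the first Nyquist zone [0, f_sample / 2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 3 kHz tone sampled at 8 kHz (more than twice the signal frequency): preserved.
print(apparent_frequency(3_000, 8_000))   # 3000

# The same tone sampled at only 4 kHz violates the Nyquist criterion: it aliases to 1 kHz.
print(apparent_frequency(3_000, 4_000))   # 1000
```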
The resolution determines the magnitude of the quantization error and therefore determines the maximum possible average signal-to-noise ratio for an ideal ADC without the use of oversampling. The values are stored electronically in binary form, so the resolution is usually expressed as the audio bit depth. In consequence, the number of discrete values available is assumed to be a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one in 256 different levels; the values can represent the ranges 0 to 255 (unsigned) or −128 to 127 (signed), depending on the application. Resolution can also be defined electrically and expressed in volts; the change in voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage. The resolution Q of the ADC is equal to the LSB voltage; the voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of intervals: Q = E_FSR / 2^M, where M is the ADC's resolution in bits and E_FSR is the full-scale voltage range.
E_FSR is given by E_FSR = V_RefHi − V_RefLow, where V_RefHi and V_RefLow are the upper and lower extremes of the voltages that can be coded. The number of voltage intervals is given by N = 2^M, where M is the ADC's resolution in bits; that is, one voltage interval is assigned in between two consecutive code levels. Example: with the coding scheme as in figure 1, a full-scale measurement range of 0 to 1 volt and an ADC resolution of 3 bits, there are 2^3 = 8 quantization levels and the ADC voltage resolution is Q = 1 V / 8 = 0.125 V. In many cases, the useful resolution of a converter is limited by the signal-to-noise ratio and other errors in the overall system, expressed as an ENOB. Quantization error is introduced by the quantization inherent in an ideal ADC; it is a rounding error between the analog input voltage and the output digitized value. The error is signal-dependent. In an ideal ADC, where the quantization error is uniformly distributed between −1/2 LSB and +1/2 LSB and the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio is given by SQNR = 20 log10(2^Q) ≈ 6.02 · Q dB, where Q is the number of quantization bits.
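The resolution and SQNR arithmetic above can be checked with a small Python sketch; the helper names are illustrative, and the 0-to-1 V, 3-bit values repeat the figure-1 example.

```python
import math

def lsb_voltage(v_ref_hi, v_ref_lo, bits):
    """Q = E_FSR / 2^M: the voltage width of one code step of an ideal ADC."""
    return (v_ref_hi - v_ref_lo) / (2 ** bits)

def ideal_sqnr_db(bits):
    """Ideal signal-to-quantization-noise ratio: SQNR = 20*log10(2^Q) ~ 6.02*Q dB."""
    return 20 * math.log10(2 ** bits)

# Figure-1 example: 0 to 1 V full scale, 3-bit ADC -> 8 levels, Q = 0.125 V.
print(lsb_voltage(1.0, 0.0, 3))   # 0.125
print(ideal_sqnr_db(16))          # ~96.3 dB, matching the 16-bit figure quoted below
```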
For example, for a 16-bit ADC, the quantization error is 96.3 dB below the maximum level. Quantization error is distributed from DC to the Nyquist frequency. Consequently, if part of the ADC's bandwidth is not used, as is the case
Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object, in contrast to on-site observation; the term is applied especially to acquiring information about the Earth. Remote sensing is used in numerous fields, including geography, land surveying and most Earth science disciplines. In current usage, the term "remote sensing" refers to the use of satellite- or aircraft-based sensor technologies to detect and classify objects on Earth, including on the surface and in the atmosphere and oceans, based on propagated signals; it may be split into "passive" and "active" remote sensing. Passive sensors gather radiation emitted or reflected by the object or surrounding areas. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, charge-coupled devices and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas, whereupon a sensor detects and measures the radiation reflected or backscattered from the target.
RADAR and LiDAR are examples of active remote sensing where the time delay between emission and return is measured, establishing the location, speed and direction of an object. Remote sensing makes it possible to collect data on inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, glacial features in Arctic and Antarctic regions, and depth sounding of coastal and ocean depths. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing replaces costly and slow data collection on the ground, ensuring in the process that areas or objects are not disturbed. Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum, which, in conjunction with larger-scale aerial or ground-based sensing and analysis, provides researchers with enough information to monitor trends such as El Niño and other natural long- and short-term phenomena. Other uses include different areas of the earth sciences such as natural resource management, agricultural fields such as land usage and conservation, national security and overhead, ground-based and stand-off collection on border areas.
The basis for multispectral collection and analysis is that of examined areas or objects that reflect or emit radiation that stands out from surrounding areas. For a summary of major remote sensing satellite systems, see the overview table. Conventional radar is associated with aerial traffic control, early warning, and certain large-scale meteorological data. Doppler radar is used by local law enforcement to monitor speed limits and in enhanced meteorological collection such as wind speed and direction within weather systems, in addition to precipitation location and intensity. Other types of active collection include plasmas in the ionosphere. Interferometric synthetic aperture radar is used to produce precise digital elevation models of large-scale terrain. Laser and radar altimeters on satellites have provided a wide range of data. By measuring the bulges of water caused by gravity, they map features on the seafloor to a resolution of a mile or so. By measuring the height and wavelength of ocean waves, the altimeters measure wind speeds and direction, and surface ocean currents and directions.
Ultrasound and radar tide gauges measure sea level and wave direction at coastal and offshore stations. Light detection and ranging (LIDAR) is well known from examples of weapon ranging, such as laser-illuminated homing of projectiles. LIDAR is used to detect and measure the concentration of various chemicals in the atmosphere, while airborne LIDAR can be used to measure the heights of objects and features on the ground more accurately than with radar technology. Vegetation remote sensing is a principal application of LIDAR. Radiometers and photometers are the most common instruments in use, collecting reflected and emitted radiation in a wide range of frequencies; the most common are visible and infrared sensors, followed by microwave, gamma-ray and ultraviolet. They may also be used to detect the emission spectra of various chemicals, providing data on chemical concentrations in the atmosphere. Spectropolarimetric imaging has been reported to be useful for target tracking purposes by researchers at the U.S. Army Research Laboratory.
They determined that manmade items possess polarimetric signatures that are not found in natural objects. These conclusions were drawn from the imaging of military trucks, like the Humvee, and trailers, with their acousto-optic tunable filter dual hyperspectral and spectropolarimetric VNIR Spectropolarimetric Imager. Stereographic pairs of aerial photographs have been used to make topographic maps by imagery and terrain analysts in trafficability and highway departments for potential routes, in addition to modelling terrestrial habitat features. Simultaneous multi-spectral platforms such as Landsat have been in use since the 1970s; these thematic mappers take images in multiple wavelengths of electromagnetic radiation and are found on Earth observation satellites, such as those of the Landsat program or the IKONOS satellite. Maps of land cover and land use from thematic mapping can be used to prospect for minerals, detect or mo
The Doppler effect is the change in frequency or wavelength of a wave in relation to an observer, moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a horn approaches and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, lower during the recession; the reason for the Doppler effect is that when the source of the waves is moving towards the observer, each successive wave crest is emitted from a position closer to the observer than the crest of the previous wave. Therefore, each wave takes less time to reach the observer than the previous wave. Hence, the time between the arrival of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are traveling, the distance between successive wave fronts is reduced, so the waves "bunch together".
Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the frequency. The distance between successive wave fronts is increased, so the waves "spread out". For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted; the total Doppler effect may therefore result from motion of the source, motion of the observer, or motion of the medium. Each of these effects is analyzed separately. For waves which do not require a medium, such as light or gravity in general relativity, only the relative difference in velocity between the observer and the source needs to be considered. Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels"; the hypothesis was tested for sound waves by Buys Ballot in 1845.
He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848. In Britain, John Scott Russell made an experimental study of the Doppler effect. In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the velocity of waves in the medium, the relationship between observed frequency f and emitted frequency f0 is given by f = ((c + v_r) / (c + v_s)) f0, where c is the velocity of waves in the medium, v_r is the velocity of the receiver relative to the medium (positive if the receiver is moving towards the source), and v_s is the velocity of the source relative to the medium (positive if the source is moving away from the receiver); the frequency is decreased if either is moving away from the other. An equivalent formula, easier to remember, is f / v_wr = f0 / v_ws = 1 / λ, where v_wr is the wave's velocity relative to the receiver, v_ws is the wave's velocity relative to the source, and λ is the wavelength. The above formula assumes that the source is either directly approaching or receding from the observer. If the source approaches the observer at an angle, the observed frequency that is first heard is higher than the object's emitted frequency.
Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion, and a continued monotonic decrease as it recedes from the observer. When the observer is close to the path of the object, the transition from high to low frequency is abrupt; when the observer is far from the path of the object, the transition from high to low frequency is gradual. If the speeds v_s and v_r are small compared to the speed of the wave, the relationship between observed frequency f and emitted frequency f0 is approximately f ≈ (1 + Δv/c) f0, so that Δf = (Δv/c) f0, where Δf = f − f0 and Δv is the velocity of the receiver relative to the source (positive when source and receiver are moving towards each other).
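A minimal Python sketch of the classical Doppler relation reconstructed above, using the usual sign convention (receiver velocity positive toward the source, source velocity positive away from the receiver); the 440 Hz horn, 343 m/s speed of sound and 20 m/s speeds are illustrative assumptions.

```python
def doppler_observed_frequency(f0, c, v_receiver=0.0, v_source=0.0):
    """Classical Doppler shift in a medium: f = f0 * (c + v_r) / (c + v_s).

    f0          emitted frequency (Hz)
    c           wave speed in the medium (m/s)
    v_receiver  receiver speed relative to the medium, positive toward the source
    v_source    source speed relative to the medium, positive away from the receiver
    """
    return f0 * (c + v_receiver) / (c + v_source)

# A 440 Hz horn approaching a stationary listener at 20 m/s in air (c ~ 343 m/s):
# the source moves toward the receiver, so v_source is negative -> higher pitch.
print(doppler_observed_frequency(440.0, 343.0, v_source=-20.0))  # ~467 Hz

# The same horn receding: v_source is positive -> lower pitch.
print(doppler_observed_frequency(440.0, 343.0, v_source=+20.0))  # ~416 Hz
```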
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a physical vector quantity; the scalar absolute value of velocity is called speed, a coherent derived unit whose quantity is measured in the SI as metres per second (m/s or m·s⁻¹). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction, or both, the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes.
Hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast it is moving and in which direction it is moving. If a car is said to travel at 60 km/h, its speed has been specified. However, if the car is said to move at 60 km/h to the north, its velocity has now been specified, and the difference is apparent. When something moves in a circular path and returns to its starting point, its average velocity is zero, but its average speed is found by dividing the circumference of the circle by the time taken to move around the circle; this is because the average velocity is calculated by considering only the displacement between the starting and end points, while the average speed considers the total distance traveled. Velocity is defined as the rate of change of position with respect to time, which may be referred to as the instantaneous velocity to emphasize the distinction from the average velocity.
In some applications the "average velocity" of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity v(t) over some time period Δt. Average velocity can be calculated as v̄ = Δx / Δt; the average velocity is always less than or equal to the average speed of an object. This can be seen by realizing that while distance is always increasing, displacement can increase or decrease in magnitude as well as change direction. In terms of a displacement-time graph, the instantaneous velocity can be thought of as the slope of the tangent line to the curve at any point, and the average velocity as the slope of the secant line between two points with t coordinates equal to the boundaries of the time period for the average velocity. The average velocity is the same as the velocity averaged over time, that is to say, its time-weighted average, which may be calculated as the time integral of the velocity: v̄ = (1 / (t1 − t0)) ∫ from t0 to t1 of v(t) dt, where we may identify Δx = ∫ from t0 to t1 of v(t) dt and Δt = t1 − t0.
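The circular-path example above can be made concrete with a short sketch; the 100 m radius and 60 s lap time are arbitrary illustration values.

```python
import math

# One full lap around a circular track of radius 100 m, taking 60 s (illustrative values).
radius, lap_time = 100.0, 60.0

distance_traveled = 2 * math.pi * radius      # circumference, about 628.3 m
displacement = 0.0                            # end point coincides with start point

average_speed = distance_traveled / lap_time  # about 10.5 m/s
average_velocity = displacement / lap_time    # 0 m/s, illustrating v_avg <= average speed

print(average_speed, average_velocity)
```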
If we consider v as velocity and x as the displacement vector, then we can express the velocity of a particle or object, at any particular time t, as the derivative of the position with respect to time: v = lim Δt→0 of Δx/Δt = dx/dt. From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time graph is the displacement x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the yellow area under the curve labeled s: x = ∫ v dt. Since the derivative of the position with respect to time gives the change in position divided by the change in time, velocity is measured in metres per second. Although the concept of an instantaneous velocity might at first seem counter-intuitive, it
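A rough numerical sketch of v = dx/dt and x = ∫ v dt: finite differences approximate the derivative and a Riemann sum approximates the integral; the position law x(t) = t² and the step size are assumptions made only for the example.

```python
# Numerical sketch of v = dx/dt and x = integral of v dt for x(t) = t^2 (so v(t) = 2t).
dt = 1e-3
ts = [i * dt for i in range(int(2.0 / dt) + 1)]          # 0 .. 2 s
position = [t * t for t in ts]

# Instantaneous velocity approximated by the finite-difference slope dx/dt.
velocity = [(position[i + 1] - position[i]) / dt for i in range(len(ts) - 1)]
print(velocity[1000])                  # about 2.0 m/s, the exact value of v at t = 1 s

# Displacement approximated as the area under the velocity-time curve (Riemann sum).
displacement = sum(v * dt for v in velocity)
print(displacement)                    # about 4.0 m, matching x(2) - x(0)
```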
Digital signal processor
A digital signal processor (DSP) is a specialized microprocessor with its architecture optimized for the operational needs of digital signal processing. The goal of a DSP is to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can execute digital signal processing algorithms but may not be able to keep up with such processing continuously in real time. Dedicated DSPs have better power efficiency, and thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs use special memory architectures that are able to fetch multiple data or instructions at the same time. Digital signal processing algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals are converted from analog to digital, manipulated digitally, then converted back to analog form. Many DSP applications have constraints on latency. Most general-purpose microprocessors and operating systems can execute DSP algorithms, but are not suitable for use in portable devices such as mobile phones and PDAs because of power efficiency constraints.
A specialized digital signal processor will tend to provide a lower-cost solution, with better performance, lower latency, and no requirements for specialised cooling or large batteries. Such performance improvements have led to the introduction of digital signal processing in commercial communications satellites, where hundreds or thousands of analog filters, frequency converters and so on are required to receive and process the uplinked signals and ready them for downlinking, and can be replaced with specialised DSPs with significant benefits to the satellites' weight, power consumption, complexity/cost of construction and flexibility of operation. For example, the SES-12 and SES-14 satellites from operator SES, both intended for launch in 2017, were built by Airbus Defence and Space with 25% of capacity using DSP. The architecture of a digital signal processor is optimized specifically for digital signal processing. Most also support some of the features of an applications processor or microcontroller, since signal processing is rarely the only task of a system.
Some useful features for optimizing DSP algorithms are outlined below. By the standards of general-purpose processors, DSP instruction sets are highly irregular. Both traditional and DSP-optimized instruction sets are able to compute any arbitrary operation, but an operation that might require multiple ARM or x86 instructions to compute might require only one instruction in a DSP-optimized instruction set. One implication for software architecture is that hand-optimized assembly-code routines are packaged into libraries for re-use, instead of relying on advanced compiler technologies to handle essential algorithms. Even with modern compiler optimizations, hand-optimized assembly code is more efficient, and many common algorithms involved in DSP calculations are hand-written in order to take full advantage of the architectural optimizations. Hardware features visible through DSP instruction sets include:
- Multiply–accumulate (MAC) operations, used extensively in all kinds of matrix operations, convolution for filtering, dot products and polynomial evaluation; fundamental DSP algorithms such as FIR filters and the fast Fourier transform depend heavily on multiply–accumulate performance (a small sketch combining MAC with saturation arithmetic appears at the end of this section).
- Instructions to increase parallelism: SIMD, VLIW and superscalar architecture.
- Specialized instructions for modulo addressing in ring buffers and bit-reversed addressing mode for FFT cross-referencing.
- Time-stationary encoding, sometimes used by digital signal processors to simplify hardware and increase coding efficiency.
- Multiple arithmetic units, which may require memory architectures that support several accesses per instruction cycle.
- Special loop controls, such as architectural support for executing a few instruction words in a tight loop without overhead for instruction fetches or exit testing.
- Saturation arithmetic, in which operations that produce overflows accumulate at the maximum values the register can hold rather than wrapping around; sometimes various sticky-bit operation modes are available.
- Fixed-point arithmetic, used to speed up arithmetic processing.
- Single-cycle operations, to increase the benefits of pipelining.
- A floating-point unit integrated directly into the datapath.
- Pipelined architecture.
- Highly parallel multiplier–accumulators.
- Hardware-controlled looping, to reduce or eliminate the overhead required for looping operations.

In engineering, hardware architecture refers to the identification of a system's physical components and their interrelationships. This description, called a hardware design model, allows hardware designers to understand how their components fit into a system architecture and provides to software component designers important information needed for software development and integration.
Clear definition of a hardware architecture allows the various traditional engineering disciplines to work more effectively together to develop and manufacture new machines and components. Hardware is als
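To make the multiply–accumulate and saturation-arithmetic features listed above concrete, here is a plain-Python sketch of a FIR filter inner loop with a saturating 16-bit accumulator; the coefficients, bit width and function names are illustrative assumptions, and a real DSP would perform each tap as a single hardware MAC instruction.

```python
def saturate(value, bits=16):
    """Saturation arithmetic: clamp to the signed range instead of wrapping around."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

def fir_filter(samples, coefficients, bits=16):
    """FIR filter expressed as repeated multiply-accumulate (MAC) steps
    with a saturating accumulator, one MAC per filter tap."""
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, c in enumerate(coefficients):
            if n - k >= 0:
                acc = saturate(acc + c * samples[n - k], bits)  # the MAC step
        out.append(acc)
    return out

# A 3-tap moving-sum filter over a short integer signal (illustrative values only).
print(fir_filter([1, 2, 3, 4, 5], [1, 1, 1]))  # [1, 3, 6, 9, 12]
```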
Teledyne Technologies, Inc. is an American industrial conglomerate with global operations. It was founded as Teledyne, Inc. by Henry Singleton and George Kozmetsky. From August 1996 to November 1999, Teledyne existed as part of the conglomerate Allegheny Teledyne Incorporated, a combination of the former Teledyne, Inc. and the former Allegheny Ludlum Corporation. On November 29, 1999, three separate entities, Teledyne Technologies, Allegheny Technologies and Water Pik Technologies, were spun off as free-standing public companies. Allegheny Technologies retained several companies of the former Teledyne, Inc. that fit with Allegheny's core business of steel and exotic metals production. At various times, Teledyne, Inc. had more than 150 companies with interests as varied as insurance, dental appliances, specialty metals and aerospace electronics, but many of these had been divested prior to the merger with Allegheny. The new Teledyne Technologies was composed of 19 companies that were earlier in Teledyne, Inc.
By 2011, Teledyne Technologies had grown to include nearly 100 companies. Teledyne Technologies operates with four major segments: Digital Imaging, Instrumentation, Engineered Systems, and Aerospace and Defense Electronics. The Digital Imaging segment handles sponsored and central research laboratories for a range of new technologies, as well as development and production efforts in digital imaging products for government applications; included are infrared detectors and opto-mechanical assemblies. The Instrumentation segment provides monitoring and control instruments for marine, scientific and defense applications, as well as harsh-environment interconnect products. The Engineered Systems segment provides systems engineering and integration, advanced technology application, software development and manufacturing solutions to space, environmental, chemical and nuclear systems and missile defense requirements; it also designs and manufactures hydrogen gas generators, fuel-based power sources and small turbine engines. The Aerospace and Defense Electronics segment provides complex electronic components and subsystems for communication products, including defense electronics, data acquisition and communications equipment for air transport and business aircraft, components and subsystems for wireless and satellite communications, as well as general aviation batteries.
As of February 2016, Teledyne Technologies listed the following companies: Teledyne Advanced Pollution Instrumentation Teledyne Anafocus Teledyne Analytical Instruments Teledyne Battery Products Teledyne Benthos Teledyne BlueView Teledyne Brown Engineering Teledyne Brown CollaborX Teledyne CML Teledyne CARIS Teledyne Controls Teledyne Cormon Teledyne Cougar Teledyne D. G. O'Brien Teledyne DALSA Teledyne Defence & Space Teledyne e2v Teledyne Electronic Manufacturing Services Teledyne Electronic Safety Products Teledyne Electronics & Communications Teledyne Energy Systems Teledyne Energetics Teledyne Europe Teledyne Gavia ehf. Teledyne Geophysical Instruments Teledyne Hastings Instruments Teledyne Imaging Sensors Teledyne Impulse Teledyne Instruments Teledyne Interconnect Devices Teledyne ISCO Teledyne Judson Technologies Teledyne KW Microwave Teledyne Labtech Teledyne LeCroy Teledyne Leeman Labs Teledyne Lighting & Display Products Teledyne Marine Teledyne MEC Teledyne Microelectronic Technologies Teledyne Microwave Teledyne Monitor Labs Teledyne Ocean Designs, Inc.
Teledyne ODI, Inc. Teledyne Odom Hydrographic Teledyne Optech Teledyne Paradise Datacom Teledyne Printed Circuit Technology Teledyne RESON Teledyne RD Instruments BlueView Technologies Teledyne Relays Teledyne Reynolds Teledyne Reynolds, a Division of Teledyne Limited Teledyne RISI Teledyne Scientific and Imaging Teledyne Scientific Company Teledyne SeaBotix Teledyne Storm Products, Cable Solutions Group in Dallas Teledyne Storm Products, Microwave in Chicago Teledyne TapTone Teledyne Tekmar Company Teledyne Test Services Teledyne TSS Teledyne Turbine Engines Teledyne VariSystems Teledyne Webb Research. Some companies in Teledyne Technologies include the following: Acoustic Research, Continental Motors, Inc., Laars, Mattituck Services, Ryan Aeronautical and Wisconsin Motors. In June 1960, Henry Singleton and George Kozmetsky, both executives with Litton Industries, formed a firm named Instrument Systems, located in Beverly Hills, California. Arthur Rock, one of America's first and most successful venture capitalists, financed the startup with a $450,000 investment.
Their basic plan was to build a major firm centering on microelectronics and control system development through acquiring existing companies. In October 1960, the first acquisition was made by purchasing the majority of stock in Amelco, a small electronics manufacturing plant. Within a short time, rights to the name Teledyne and its associated logo were bought. In addition to Amelco, two other electronics manufacturing firms were acquired; by the end of 1960, Teledyne had about 400 employees and 80,000 square feet of floor space devoted to engineering development and manufacturing. Teledyne stock was first offered to the public in May 1961. During its first full fiscal year of operations, ending in October 1961, Teledyne had sales of $4,491,000 with a net income of $58,000. Teledyne's growth continued in 1962, with the acquisition of companies through equity agreements. Internally, Teledyne Systems was formed as the centerpiece of the firm's aerospace systems business, diversifying the business
Piezoelectricity is the electric charge that accumulates in certain solid materials in response to applied mechanical stress. The word piezoelectricity means electricity resulting from pressure and latent heat; it is derived from the Greek word πιέζειν (piezein), which means to squeeze or press. French physicists Jacques and Pierre Curie discovered piezoelectricity in 1880. The piezoelectric effect results from the linear electromechanical interaction between the mechanical and electrical states in crystalline materials with no inversion symmetry. The piezoelectric effect is a reversible process: materials exhibiting the piezoelectric effect also exhibit the reverse piezoelectric effect, the internal generation of a mechanical strain resulting from an applied electrical field. For example, lead zirconate titanate crystals will generate measurable piezoelectricity when their static structure is deformed by about 0.1% of the original dimension. Conversely, those same crystals will change about 0.1% of their static dimension when an external electric field is applied to the material.
The inverse piezoelectric effect is used in the production of ultrasonic sound waves. Piezoelectricity is exploited in a number of useful applications, such as the production and detection of sound, piezoelectric inkjet printing, generation of high voltages, electronic frequency generation, microbalances, driving an ultrasonic nozzle and ultrafine focusing of optical assemblies. It also forms the basis for a number of scientific instrumental techniques with atomic resolution, the scanning probe microscopies, such as STM, AFM, MTA and SNOM. It finds everyday uses, such as acting as the ignition source for cigarette lighters and push-start propane barbecues, as the time reference source in quartz watches, and in amplification pickups for some guitars. The pyroelectric effect, by which a material generates an electric potential in response to a temperature change, was studied by Carl Linnaeus and Franz Aepinus in the mid-18th century. Drawing on this knowledge, both René Just Haüy and Antoine César Becquerel posited a relationship between mechanical stress and electric charge.
The first demonstration of the direct piezoelectric effect was in 1880 by the brothers Pierre Curie and Jacques Curie. They combined their knowledge of pyroelectricity with their understanding of the underlying crystal structures that gave rise to pyroelectricity to predict crystal behavior, and demonstrated the effect using crystals of tourmaline, topaz, cane sugar and Rochelle salt. Quartz and Rochelle salt exhibited the most piezoelectricity. The Curies, however, did not predict the converse piezoelectric effect. The converse effect was mathematically deduced from fundamental thermodynamic principles by Gabriel Lippmann in 1881; the Curies confirmed the existence of the converse effect and went on to obtain quantitative proof of the complete reversibility of electro-elasto-mechanical deformations in piezoelectric crystals. For the next few decades, piezoelectricity remained something of a laboratory curiosity. More work was done to define the crystal structures that exhibited piezoelectricity; this culminated in 1910 with the publication of Woldemar Voigt's Lehrbuch der Kristallphysik, which described the 20 natural crystal classes capable of piezoelectricity and rigorously defined the piezoelectric constants using tensor analysis.
The first practical application for piezoelectric devices was sonar, first developed during World War I. In France in 1917, Paul Langevin and his coworkers developed an ultrasonic submarine detector; the detector consisted of a transducer, made of thin quartz crystals glued between two steel plates, and a hydrophone to detect the returned echo. By emitting a high-frequency pulse from the transducer and measuring the amount of time it takes to hear an echo from the sound waves bouncing off an object, one can calculate the distance to that object. The use of piezoelectricity in sonar, and the success of that project, created intense development interest in piezoelectric devices. Over the next few decades, new piezoelectric materials and new applications for those materials were explored and developed. Piezoelectric devices found homes in many fields. Ceramic phonograph cartridges simplified player design, were cheap and accurate, and made record players cheaper to maintain and easier to build; the development of the ultrasonic transducer allowed for easy measurement of viscosity and elasticity in fluids and solids, resulting in huge advances in materials research.
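A tiny sketch of the echo-ranging arithmetic just described: half the round-trip time multiplied by the wave speed gives the one-way distance; the ~1500 m/s speed of sound in seawater and the 0.4 s delay are assumptions for illustration only.

```python
def echo_distance(round_trip_seconds, wave_speed=1500.0):
    """Distance to a target from a sonar echo: the pulse travels out and back,
    so only half the round-trip time counts toward the one-way distance."""
    return wave_speed * round_trip_seconds / 2.0

# An echo heard 0.4 s after the pulse, assuming sound in seawater at roughly 1500 m/s.
print(echo_distance(0.4))   # 300.0 metres to the target
```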
Ultrasonic time-domain reflectometers could find flaws inside cast metal and stone objects, improving structural safety. During World War II, independent research groups in the United States and Japan discovered a new class of synthetic materials, called ferroelectrics, which exhibited piezoelectric constants many times higher than natural materials; this led to intense research to develop barium titanate and lead zirconate titanate materials with specific properties for particular applications. One significant example of the use of piezoelectric crystals was developed by Bell Telephone Laboratories. Following World War I, Frederick R. Lack, working in radio telephony in the engineering department, developed the “AT cut” crystal, a crystal that operated through a wide range of temperatures. Lack's crystal didn't nee