Iron is a chemical element with symbol Fe and atomic number 26. It is a metal that belongs to group 8 of the periodic table. By mass it is the most common element on Earth, forming much of Earth's inner core, and it is the fourth most common element in the Earth's crust. Pure iron is rare in the Earth's crust, being limited largely to meteorites. Iron ores are quite abundant, but extracting usable metal from them requires kilns or furnaces capable of reaching 1500 °C or higher, about 500 °C hotter than is needed to smelt copper. Humans began to master that process in Eurasia only about 2000 BCE, and iron began to displace copper alloys for tools and weapons, in some regions, only around 1200 BCE; that event is considered the transition from the Bronze Age to the Iron Age. Iron alloys, such as steel and the special steels, are now by far the most common industrial metals, because of their mechanical properties and their low cost. Pristine and smooth pure iron surfaces are mirror-like silvery-gray. However, iron reacts with oxygen and water to give brown to black hydrated iron oxides, commonly known as rust.
Unlike the oxides of some other metals, which form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing fresh surfaces for corrosion. The body of an adult human contains about 3 to 5 grams of elemental iron, mostly in hemoglobin and myoglobin; these two proteins play essential roles in vertebrate metabolism: oxygen transport by blood and oxygen storage in muscles. To maintain the necessary levels, human iron metabolism requires a minimum of iron in the diet. Iron is the metal at the active site of many important redox enzymes dealing with cellular respiration and with oxidation and reduction in plants and animals. Chemically, the most common oxidation states of iron are +2 and +3. Iron shares many properties of other transition metals, including the other group 8 elements, ruthenium and osmium. Iron forms compounds in a wide range of oxidation states, −2 to +7, as well as many coordination compounds. At least four allotropes of iron are known, conventionally denoted α, γ, δ, and ε; the first three forms are observed at ordinary pressures.
As molten iron cools past its freezing point of 1538 °C, it crystallizes into its δ allotrope, which has a body-centered cubic (bcc) crystal structure. As it cools further to 1394 °C, it changes to its γ-iron allotrope, a face-centered cubic (fcc) crystal structure, also called austenite. At 912 °C and below, the crystal structure again becomes the bcc α-iron allotrope. The physical properties of iron at high pressures and temperatures have been studied extensively because of their relevance to theories about the cores of the Earth and other planets. Above approximately 10 GPa and temperatures of a few hundred kelvin or less, α-iron changes into a hexagonal close-packed (hcp) structure, known as ε-iron; the higher-temperature γ-phase also changes into ε-iron, but does so at higher pressure. Some controversial experimental evidence exists for a stable β phase at pressures above 50 GPa and temperatures of at least 1500 K; it is supposed to have a double-hcp structure. The inner core of the Earth is presumed to consist of an iron-nickel alloy with the ε structure.
The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus. This same trend appears for ruthenium but not osmium. The melting point of iron is experimentally well defined for pressures less than 50 GPa; for greater pressures, published data still vary by tens of gigapascals and over a thousand kelvin. Below its Curie point of 770 °C, α-iron changes from paramagnetic to ferromagnetic: the spins of the two unpaired electrons in each atom align with the spins of its neighbors, creating an overall magnetic field. This happens because the orbitals of those two electrons do not point toward neighboring atoms in the lattice, and therefore are not involved in metallic bonding. In the absence of an external magnetic field, the atoms get spontaneously partitioned into magnetic domains, about 10 micrometres across, such that the atoms in each domain have parallel spins, but neighboring domains point in different directions.
Thus a macroscopic piece of iron will have a nearly zero overall magnetic field. Application of an external magnetic field causes the domains that are magnetized in the same general direction to grow at the expense of adjacent ones that point in other directions, reinforcing the external field; this effect is exploited in devices that need to channel magnetic fields, such as electrical transformers, magnetic recording heads, and electric motors. Impurities, lattice defects, or grain and particle boundaries can "pin" the domains in the new positions, so that the effect persists after the external field is removed, thus turning the iron object into a permanent magnet. Similar behavior is exhibited by some iron compounds, such as the ferrites.
In physics, cryogenics is the production and behaviour of materials at very low temperatures. A person who studies elements that have been subjected to extremely cold temperatures is called a cryogenicist. It is not well-defined at what point on the temperature scale refrigeration ends and cryogenics begins, but scientists generally assume a gas to be cryogenic if it can be liquefied at or below −150 °C. The U.S. National Institute of Standards and Technology has chosen to consider the field of cryogenics as that involving temperatures below −180 °C. This is a logical dividing line, since the normal boiling points of the so-called permanent gases lie below −180 °C, while the Freon refrigerants and other common refrigerants have boiling points above −180 °C. The discovery of superconducting materials with critical temperatures above the boiling point of liquid nitrogen has provided new interest in reliable, low-cost methods of producing high-temperature cryogenic refrigeration. The term "high temperature cryogenic" describes temperatures ranging from above the boiling point of liquid nitrogen, −195.79 °C, up to −50 °C, the conventional upper limit of the field.
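The −150 °C working definition above is easy to express as a quick check. A minimal Python sketch follows; the function and constant names are illustrative, not from any standard library:

```python
# Working definition from the text: a gas is "cryogenic" if it can be
# liquefied at or below -150 °C.
CRYOGENIC_LIMIT_C = -150.0

def is_cryogenic(boiling_point_c: float) -> bool:
    """Return True if a gas with this normal boiling point counts as cryogenic."""
    return boiling_point_c <= CRYOGENIC_LIMIT_C

def celsius_to_kelvin(t_c: float) -> float:
    """Convert a Celsius temperature to kelvins."""
    return t_c + 273.15

# Nitrogen boils at -195.79 °C (77.36 K): cryogenic.
print(is_cryogenic(-195.79))  # True
# A common refrigerant boiling near -29.8 °C is not.
print(is_cryogenic(-29.8))    # False
```

Under NIST's stricter −180 °C dividing line, the comparison constant would simply change; the structure of the check is the same.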
Cryogenicists use the Kelvin or Rankine temperature scale, both of which measure from absolute zero, rather than more usual scales such as Celsius or Fahrenheit, with their zeroes at arbitrary temperatures. Related fields include:
Cryogenics - The branches of engineering that involve the study of low temperatures, how to produce them, and how materials behave at those temperatures.
Cryobiology - The branch of biology involving the study of the effects of low temperatures on organisms.
Cryoconservation of animal genetic resources - The conservation of genetic material with the intention of conserving a breed.
Cryosurgery - The branch of surgery applying cryogenic temperatures to destroy malignant tissue, e.g. cancer cells.
Cryoelectronics - The study of electronic phenomena at cryogenic temperatures. Examples include variable-range hopping.
Cryotronics - The practical application of cryoelectronics.
Cryonics - Cryopreserving humans and animals with the intention of future revival. "Cryogenics" is sometimes erroneously used to mean "cryonics" in the press.
The word cryogenics stems from Greek κρύο – "cold" + γονική – "having to do with production". Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used cryogen and is purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached. These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass, and protected in a metal outer container. Dewar flasks for colder liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing. Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over down to −195 °C.
Cryogenic transfer pumps and cryogenic valves are used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks. The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Ed Busch. With a background in the heat-treating industry, Busch founded a company in Detroit called CryoTech in 1966, which merged with 300 Below in 1999 to become the world's largest and oldest commercial cryogenic processing company. Busch experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy using cryogenic tempering instead of heat treating; this evolved in the late 1990s into the treatment of other parts. Cryogens, such as liquid nitrogen, are further used for specialty chilling and freezing applications.
Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately −100 °C. Special cryogenic chemical reactors are used to remove reaction heat and provide a low-temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at low temperatures, which makes cryogenic milling an option for some materials that cannot be milled at higher temperatures. Cryogenic processing is not a substitute for heat treatment, but rather an extension of the heating–quenching–tempering cycle. When an item is quenched, the final temperature is ambient only as a matter of convenience: there is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to −320 °F (−196 °C). In most instances the cryogenic cycle is followed by a heat tempering procedure.
As not all alloys have the same chemical constituents, the tempering procedure varies according to the material's chemical composition.
Ti:sapphire lasers are tunable lasers which emit red and near-infrared light in the range from 650 to 1100 nanometers. These lasers are mainly used in scientific research because of their tunability and their ability to generate ultrashort pulses. Lasers based on Ti:sapphire were first constructed in June 1982 by Peter Moulton at the MIT Lincoln Laboratory. Titanium-sapphire refers to the lasing medium, a crystal of sapphire doped with titanium ions. A Ti:sapphire laser is usually pumped with another laser with a wavelength of 514 to 532 nm, for which argon-ion lasers and frequency-doubled Nd:YAG, Nd:YLF, and Nd:YVO4 lasers are used. Ti:sapphire lasers operate most efficiently at wavelengths near 800 nm. Mode-locked oscillators generate ultrashort pulses with a typical duration between a few picoseconds and 10 femtoseconds, in special cases around 5 femtoseconds; the pulse repetition frequency is in most cases around 70 to 90 MHz. Ti:sapphire oscillators are normally pumped with a continuous-wave laser beam from an argon or frequency-doubled Nd:YVO4 laser.
Such an oscillator has an average output power of 0.4 to 2.5 watts. Chirped-pulse amplifiers generate ultrashort, ultra-high-intensity pulses with a duration of 20 to 100 femtoseconds. A typical one-stage amplifier can produce pulses of up to 5 millijoules in energy at a repetition frequency of 1000 hertz, while a larger, multistage facility can produce pulses up to several joules, with a repetition rate of up to 10 Hz. Amplifier crystals are usually pumped with a pulsed frequency-doubled Nd:YLF laser at 527 nm and operate at 800 nm. Two different designs exist for the amplifier: the regenerative amplifier and the multi-pass amplifier. Regenerative amplifiers operate by amplifying single pulses from an oscillator. Instead of a normal cavity with a partially reflective mirror, they contain high-speed optical switches that insert a pulse into a cavity and take the pulse out of the cavity at exactly the right moment, when it has been amplified to a high intensity. The term 'chirped-pulse' refers to a special construction, necessary to prevent the pulse from damaging the components in the laser.
The pulse is stretched in time so that the energy is not all located at the same point in time and space; this prevents damage to the optics in the amplifier. The pulse is then optically amplified and recompressed in time to form a short, localized pulse. In a multi-pass amplifier, there are no optical switches. Instead, mirrors guide the beam a fixed number of times through the Ti:sapphire crystal with slightly different directions. A pulsed pump beam can also be multi-passed through the crystal, so that successive passes pump the crystal further. First the pump beam pumps a spot in the gain medium. The signal beam first passes through the center for maximal amplification, but in later passes its diameter is increased to stay below the damage threshold, to avoid amplifying the outer parts of the beam (thus increasing beam quality and cutting off some amplified spontaneous emission), and to fully deplete the inversion in the gain medium. The pulses from chirped-pulse amplifiers are often converted to other wavelengths by means of various nonlinear optical processes.
At 5 mJ in 100 femtoseconds, the peak power of such a laser is 50 gigawatts. When focused by a lens, these laser pulses will ionise any material placed in the focus, including air molecules. Titanium-sapphire is especially suitable for pulsed lasers, since an ultrashort pulse inherently contains a wide spectrum of frequency components; this is due to the inverse relationship between the frequency bandwidth of a pulse and its time duration, the two being conjugate variables. However, with an appropriate design, titanium-sapphire can also be used in continuous-wave lasers with narrow linewidths tunable over a wide range. The Ti:sapphire laser was invented by Peter Moulton in June 1982 at MIT Lincoln Laboratory in its continuous-wave version. Subsequently, these lasers were shown to generate ultrashort pulses through Kerr-lens mode-locking. Strickland and Mourou, working with others at the University of Rochester, demonstrated chirped pulse amplification of this laser within a few years, for which the two shared in the 2018 Nobel Prize in Physics.
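The 50-gigawatt figure quoted above is simply pulse energy divided by pulse duration. A minimal Python sketch of this back-of-the-envelope estimate (real pulses carry shape-dependent correction factors that are ignored here):

```python
# Peak power of an ultrashort pulse, estimated as energy / duration.
def peak_power_watts(energy_joules: float, duration_seconds: float) -> float:
    """Rough peak power: total pulse energy spread evenly over its duration."""
    return energy_joules / duration_seconds

# 5 mJ delivered in 100 fs, as in the text:
p = peak_power_watts(5e-3, 100e-15)
print(f"{p:.0e} W")  # prints 5e+10 W, i.e. 50 gigawatts
```

The same function applied to a multistage facility's several-joule pulses shows why such systems reach the terawatt range.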
Cumulative product sales of the Ti:sapphire laser have amounted to more than $600 million, making it a big commercial success that has sustained the solid-state laser industry for more than three decades. The ultrashort pulses generated by Ti:sapphire lasers in the time domain correspond to mode-locked optical frequency combs in the spectral domain. Both the temporal and spectral properties of these lasers make them desirable for frequency metrology, spectroscopy, and for pumping nonlinear optical processes. One half of the Nobel Prize in Physics in 2005 was awarded for the development of the optical frequency comb technique, which relied on the Ti:sapphire laser and its self-mode-locking properties. The continuous-wave versions of these lasers can be designed to have nearly quantum-limited performance, resulting in low noise and a narrow linewidth, making them attractive for quantum optics experiments. Apart from fundamental science applications in the laboratory, this laser has found biological applications such as deep-tissue multiphoton imaging and industrial applications such as cold micromachining.
When operated in the chirped pulse amplification mode, they can be used to generate high peak powers in the terawatt range, which finds use in nuclear fusion research.
In biology, tissue is a cellular organizational level between cells and a complete organ. A tissue is an ensemble of similar cells and their extracellular matrix from the same origin that together carry out a specific function. Organs are then formed by the functional grouping together of multiple tissues. The English word "tissue" is derived from the French "tissu", meaning something woven, from the verb tisser, "to weave". The study of human and animal tissues is known as histology or, in connection with disease, histopathology. For plants, the discipline is called plant anatomy. The classical tools for studying tissues are the paraffin block in which tissue is embedded and then sectioned, the histological stain, and the optical microscope. In the last couple of decades, developments in electron microscopy, immunofluorescence, and the use of frozen tissue sections have enhanced the detail that can be observed in tissues. With these tools, the classical appearances of tissues can be examined in health and disease, enabling considerable refinement of medical diagnosis and prognosis.
Animal tissues are grouped into four basic types: connective, muscle, nervous, and epithelial. Collections of tissues joined in structural units to serve a common function compose organs. While all eumetazoan animals can be considered to contain the four tissue types, the manifestation of these tissues can differ depending on the type of organism. For example, the origin of the cells comprising a particular tissue type may differ developmentally for different classifications of animals. The epithelium in all birds and animals is derived from the ectoderm and endoderm, with a small contribution from the mesoderm, which forms the endothelium, a specialized type of epithelium that composes the vasculature. By contrast, a true epithelial tissue is present only in a single layer of cells held together via occluding junctions called tight junctions, to create a selectively permeable barrier. This tissue covers all organismal surfaces that come in contact with the external environment, such as the skin, the airways, and the digestive tract.
It serves functions of protection and absorption, and is separated from other tissues below by a basal lamina. Connective tissues are fibrous tissues; they are made up of cells separated by non-living material, called an extracellular matrix. This matrix can be liquid or rigid: for example, blood contains plasma as its matrix, while bone's matrix is rigid. Connective tissue gives shape to organs and holds them in place. Blood, tendon, ligament, and areolar tissues are examples of connective tissues. One method of classifying connective tissues is to divide them into three types: fibrous connective tissue, skeletal connective tissue, and fluid connective tissue. Muscle cells form the active contractile tissue of the body known as muscle tissue or muscular tissue. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle tissue is separated into three distinct categories: visceral or smooth muscle, found in the inner linings of organs; skeletal muscle, attached to bones and responsible for voluntary movement; and cardiac muscle, found in the heart. Cells comprising the central nervous system and peripheral nervous system are classified as nervous tissue.
In the central nervous system, neural tissues form the brain and spinal cord. In the peripheral nervous system, neural tissues form the cranial nerves and spinal nerves, inclusive of the motor neurons. The epithelial tissues are formed by cells that cover the organ surfaces, such as the surface of the skin, the airways, the reproductive tract, and the inner lining of the digestive tract. The cells comprising an epithelial layer are linked via tight junctions. In addition to this protective function, epithelial tissue may also be specialized to function in secretion and absorption. Epithelial tissue helps to protect organs from microorganisms and fluid loss. Functions of epithelial tissue:
The cells of the body's surface form the outer layer of skin.
Inside the body, epithelial cells form the lining of the mouth and alimentary canal and protect these organs.
Epithelial tissues help in the absorption of water and nutrients.
Epithelial tissues help in the elimination of waste.
Epithelial tissues, organized into glands, secrete hormones.
Some epithelial tissues also perform other secretory functions.
They secrete a variety of substances such as sweat and enzymes. There are many kinds of epithelium, and the nomenclature is somewhat variable. Most classification schemes combine a description of the cell shape in the upper layer of the epithelium with a word denoting the number of layers: either simple or stratified. However, other cellular features, such as cilia, may also be described in the classification system. Some common kinds of epithelium are listed below:
Simple squamous epithelium
Stratified squamous epithelium
Simple cuboidal epithelium
Transitional epithelium
Pseudostratified columnar epithelium
Columnar epithelium
Glandular epithelium
Ciliated columnar epithelium
In plant anatomy, tissues are categorized broadly into three tissue systems: the epidermis, the ground tissue, and the vascular tissue.
Epidermis - Cells forming the outer surface of the leaves and of the young plant body.
Vascular tissue - The primary components of vascular tissue are the xylem and phloem; these transport nutrients internally.
Ground tissue - Ground tissue is less differentiated than other tissues.
Spectroscopy is the study of the interaction between matter and electromagnetic radiation. Spectroscopy originated through the study of visible light dispersed according to its wavelength by a prism. The concept was later expanded to include any interaction with radiative energy as a function of its wavelength or frequency, predominantly in the electromagnetic spectrum, though matter waves and acoustic waves can also be considered forms of radiative energy. Spectroscopic data are often represented by an emission spectrum, a plot of the response of interest as a function of wavelength or frequency. Spectroscopy, primarily in the electromagnetic spectrum, is a fundamental exploratory tool in the fields of physics and astronomy, allowing the composition, physical structure, and electronic structure of matter to be investigated at the atomic scale, the molecular scale, the macro scale, and over astronomical distances. Important applications arise from biomedical spectroscopy in the areas of tissue analysis and medical imaging. Spectroscopy and spectrography are terms used to refer to the measurement of radiation intensity as a function of wavelength and are often used to describe experimental spectroscopic methods.
Spectral measurement devices are referred to as spectrometers, spectrophotometers, spectrographs, or spectral analyzers. Daily observations of color can be related to spectroscopy. Neon lighting is a direct application of atomic spectroscopy. Neon and other noble gases have characteristic emission frequencies, and neon lamps use collision of electrons with the gas to excite these emissions. Inks and paints include chemical compounds selected for their spectral characteristics in order to generate specific colors and hues. A commonly encountered molecular spectrum is that of nitrogen dioxide. Gaseous nitrogen dioxide has a characteristic red absorption feature, and this gives air polluted with nitrogen dioxide a reddish-brown color. Rayleigh scattering is a spectroscopic scattering phenomenon. Spectroscopic studies were central to the development of quantum mechanics and included Max Planck's explanation of blackbody radiation, Albert Einstein's explanation of the photoelectric effect, and Niels Bohr's explanation of atomic structure and spectra.
Spectroscopy is used in physical and analytical chemistry because atoms and molecules have unique spectra. As a result, these spectra can be used to detect and quantify information about the atoms and molecules. Spectroscopy is also used in astronomy and remote sensing on Earth. Most research telescopes have spectrographs; the measured spectra are used to determine the chemical composition and physical properties of astronomical objects. One of the central concepts in spectroscopy is a resonance and its corresponding resonant frequency. Resonances were first characterized in mechanical systems such as pendulums. Mechanical systems that vibrate or oscillate will experience large-amplitude oscillations when they are driven at their resonant frequency. A plot of amplitude vs. excitation frequency will have a peak centered at the resonance frequency. This plot is one type of spectrum, with the peak referred to as a spectral line, and most spectral lines have a similar appearance. In quantum mechanical systems, the analogous resonance is a coupling of two quantum mechanical stationary states of one system, such as an atom, via an oscillatory source of energy such as a photon.
The coupling of the two states is strongest when the energy of the source matches the energy difference between the two states. The energy E of a photon is related to its frequency ν by E = hν, where h is Planck's constant, and so a spectrum of the system response vs. photon frequency will peak at the resonant frequency or energy. Particles such as electrons and neutrons have a comparable relationship, the de Broglie relations, between their kinetic energy and their wavelength and frequency, and therefore can also excite resonant interactions. Spectra of atoms and molecules often consist of a series of spectral lines, each one representing a resonance between two different quantum states. The explanation of these series, and of the spectral patterns associated with them, was one of the experimental enigmas that drove the development and acceptance of quantum mechanics. The hydrogen spectral series in particular was first explained by the Rutherford-Bohr quantum model of the hydrogen atom. In some cases spectral lines are well separated and distinguishable, but spectral lines can also overlap and appear to be a single transition if the density of energy states is high enough.
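The relation E = hν (equivalently E = hc/λ for light) is straightforward to evaluate numerically. A small Python sketch using the SI-defined values of h and c; the function names are illustrative:

```python
# Photon energy from frequency (E = h * nu) or from wavelength (E = h * c / lambda).
H = 6.62607015e-34  # Planck's constant in J*s (exact by SI definition)
C = 299_792_458.0   # speed of light in vacuum in m/s (exact by SI definition)
EV = 1.602176634e-19  # one electronvolt in joules (exact by SI definition)

def photon_energy_from_frequency(nu_hz: float) -> float:
    """Energy in joules of a photon with frequency nu_hz."""
    return H * nu_hz

def photon_energy_from_wavelength(lambda_m: float) -> float:
    """Energy in joules of a photon with vacuum wavelength lambda_m."""
    return H * C / lambda_m

# An 800 nm photon (the Ti:sapphire region mentioned earlier) carries
# about 2.48e-19 J, i.e. roughly 1.55 eV.
e = photon_energy_from_wavelength(800e-9)
print(e, e / EV)
```

Higher-frequency sources peak the response at proportionally higher energies, which is exactly the resonance condition described above.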
Named series of lines include the principal, sharp, diffuse, and fundamental series. Spectroscopy is a sufficiently broad field that many sub-disciplines exist, each with numerous implementations of specific spectroscopic techniques. The various implementations and techniques can be classified in several ways. The types of spectroscopy are distinguished by the type of radiative energy involved in the interaction. In many applications, the spectrum is determined by measuring changes in the intensity or frequency of this energy. Electromagnetic radiation was the first source of energy used for spectroscopic studies. Techniques that employ electromagnetic radiation are typically classified by the wavelength region of the spectrum and include microwave and terahertz spectroscopy, among others.
In physics, the wavelength is the spatial period of a periodic wave—the distance over which the wave's shape repeats. It is thus the inverse of the spatial frequency. Wavelength is usually determined by considering the distance between consecutive corresponding points of the same phase, such as crests, troughs, or zero crossings, and is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. Wavelength is commonly designated by the Greek letter lambda (λ). The term wavelength is also sometimes applied to modulated waves, and to the sinusoidal envelopes of modulated waves or waves formed by interference of several sinusoids. Assuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to the frequency of the wave: waves with higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. Wavelength depends on the medium through which a wave travels. Examples of wave-like phenomena are sound waves, light, water waves, and periodic electrical signals in a conductor.
A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and the magnetic field vary. Water waves are variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary. Wavelength is a measure of the distance between repetitions of a shape feature such as peaks, valleys, or zero-crossings, not a measure of how far any given particle moves. For example, in sinusoidal waves over deep water a particle near the water's surface moves in a circle of the same diameter as the wave height, unrelated to wavelength. The range of wavelengths or frequencies for wave phenomena is called a spectrum. The name originated with the visible light spectrum but now can be applied to the entire electromagnetic spectrum as well as to a sound spectrum or vibration spectrum. In linear media, any wave pattern can be described in terms of the independent propagation of sinusoidal components. The wavelength λ of a sinusoidal waveform traveling at constant speed v is given by λ = v / f, where v is called the phase speed of the wave and f is the wave's frequency.
In a dispersive medium, the phase speed itself depends upon the frequency of the wave, making the relationship between wavelength and frequency nonlinear. In the case of electromagnetic radiation—such as light—in free space, the phase speed is the speed of light, about 3×10⁸ m/s. Thus the wavelength of a 100 MHz electromagnetic wave is about: 3×10⁸ m/s divided by 10⁸ Hz = 3 metres. The wavelength of visible light ranges from deep red, roughly 700 nm, to violet, roughly 400 nm. For sound waves in air, the speed of sound is 343 m/s. The wavelengths of sound frequencies audible to the human ear (20 Hz to 20 kHz) are thus between 17 m and 17 mm, respectively. Note that the wavelengths in audible sound are much longer than those in visible light. A standing wave is an undulatory motion that stays in one place. A sinusoidal standing wave includes stationary points of no motion, called nodes, and the wavelength is twice the distance between nodes. The upper figure shows three standing waves in a box. The walls of the box are considered to require the wave to have nodes at the walls of the box, determining which wavelengths are allowed.
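The worked figures above (3 m at 100 MHz; 17 m down to 17 mm for audible sound) all come from the single relation λ = v/f. A minimal Python sketch, with illustrative names:

```python
# Wavelength from phase speed and frequency: lambda = v / f.
def wavelength(phase_speed_m_s: float, frequency_hz: float) -> float:
    """Wavelength in metres of a wave with the given phase speed and frequency."""
    return phase_speed_m_s / frequency_hz

C_LIGHT = 3.0e8   # approximate speed of light in free space, m/s
V_SOUND = 343.0   # speed of sound in air, m/s

print(wavelength(C_LIGHT, 100e6))   # 100 MHz radio wave: 3.0 m
print(wavelength(V_SOUND, 20.0))    # 20 Hz sound: about 17 m
print(wavelength(V_SOUND, 20e3))    # 20 kHz sound: about 17 mm
```

The same function, fed a frequency-dependent phase speed, would reproduce the nonlinear wavelength-frequency relationship of a dispersive medium.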
For example, for an electromagnetic wave, if the box has ideal metal walls, the condition for nodes at the walls results because the metal walls cannot support a tangential electric field, forcing the wave to have zero amplitude at the wall. The stationary wave can be viewed as the sum of two traveling sinusoidal waves of oppositely directed velocities. Wavelength and wave velocity are related just as for a traveling wave. For example, the speed of light can be determined from observation of standing waves in a metal box containing an ideal vacuum. Traveling sinusoidal waves are represented mathematically in terms of their velocity v, frequency f, and wavelength λ as: y(x, t) = A cos(2π(x/λ − f t)) = A cos((2π/λ)(x − v t)), where y is the value of the wave at any position x and time t, and A is the amplitude of the wave. They are commonly expressed in terms of wavenumber k and angular frequency ω as: y(x, t) = A cos(k x − ω t) = A cos(k(x − v t)), in which wavelength and wavenumber are related to velocity and frequency as: k = 2π/λ = 2πf/v = ω/v.
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example: if a newborn baby's heart beats at a frequency of 120 times a minute (2 hertz), its period—the time interval between beats—is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, radio waves, and light. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). The relation between the frequency f and the period T of a repeating event or oscillation is given by f = 1/T.
The SI derived unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second: if a TV has a refresh rate of 1 hertz, the TV's screen will change its picture once a second. A previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm; 60 rpm equals one hertz. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period. Angular frequency, denoted by the Greek letter ω, is defined as the rate of change of angular displacement, θ, or the rate of change of the phase of a sinusoidal waveform, or as the rate of change of the argument of the sine function: y(t) = sin(θ(t)) = sin(ω t) = sin(2π f t), with dθ/dt = ω = 2π f. Angular frequency is measured in radians per second but, for discrete-time signals, can also be expressed as radians per sampling interval, which is a dimensionless quantity.
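The reciprocal relation f = 1/T, the heartbeat example, and the rpm-to-hertz conversion above can all be checked with a few lines of Python (names are illustrative):

```python
# Frequency-period reciprocity and the rpm conversion from the text.
def frequency_from_period(period_s: float) -> float:
    """f = 1 / T, in hertz for a period in seconds."""
    return 1.0 / period_s

def rpm_to_hz(rpm: float) -> float:
    """Revolutions (or beats) per minute converted to hertz."""
    return rpm / 60.0

# A newborn heart at 120 beats per minute: 2 Hz, so the period is 0.5 s.
f = rpm_to_hz(120.0)
print(f, 1.0 / f)          # 2.0 0.5
print(rpm_to_hz(60.0))     # 1.0 (60 rpm equals one hertz)
print(frequency_from_period(0.5))  # 2.0
```

Angular frequency would simply multiply these values by 2π.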
Angular frequency is larger than regular frequency by a factor of 2π. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g. y(x) = sin(θ(x)) = sin(k x), with dθ/dx = k. Wavenumber, k, is the spatial frequency analogue of angular temporal frequency and is measured in radians per meter. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength, λ. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v / λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes: f = c / λ. When waves from a monochromatic source travel from one medium to another, their frequency remains the same—only their wavelength and speed change. Measurement of frequency can be done in the following ways. Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period.
For example, if 71 events occur within 15 seconds, the frequency is: f = 71 / (15 s) ≈ 4.73 Hz. If the number of counts is not large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count; this is called gating error and causes an average error in the calculated frequency of Δf = 1/(2T), where T is the timing interval.
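The counting method and its gating-error bound can be sketched directly from the formulas above (the function names here are illustrative):

```python
# Frequency estimated by counting events over a gate time, plus the average
# "gating error" discussed in the text: Delta_f = 1 / (2 * T).
def measured_frequency(counts: int, gate_time_s: float) -> float:
    """Estimated frequency in hertz: event count divided by the gate time."""
    return counts / gate_time_s

def gating_error_hz(gate_time_s: float) -> float:
    """Average frequency error from the half-count counting uncertainty."""
    return 1.0 / (2.0 * gate_time_s)

# 71 events in 15 seconds, as in the text:
f = measured_frequency(71, 15.0)
print(round(f, 2))            # 4.73
print(gating_error_hz(15.0))  # about 0.033 Hz of average error
```

The formula makes the trade-off explicit: a longer gate time shrinks the gating error, which is why timing a fixed number of occurrences is preferred when the count would otherwise be small.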