An optical fiber is a flexible, transparent fiber made by drawing glass or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss and because fibers are immune to electromagnetic interference. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of, confined spaces, as in the case of a fiberscope. Specially designed fibers are used for a variety of other applications, including fiber-optic sensors and fiber lasers. Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers.
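As a rough numerical illustration of how the core/cladding index contrast confines light, the sketch below computes the numerical aperture and acceptance angle of a step-index fiber. The index values are assumed, typical figures rather than data from any particular fiber:

```python
import math

def numerical_aperture(n_core: float, n_cladding: float) -> float:
    """NA of a step-index fiber: NA = sqrt(n_core^2 - n_cladding^2)."""
    return math.sqrt(n_core**2 - n_cladding**2)

def acceptance_angle_deg(n_core: float, n_cladding: float) -> float:
    """Half-angle of the cone of light the fiber accepts from air."""
    return math.degrees(math.asin(numerical_aperture(n_core, n_cladding)))

# Illustrative indices for a silica fiber (assumed values):
na = numerical_aperture(1.48, 1.46)
print(f"NA = {na:.3f}")                                  # NA = 0.242
print(f"{acceptance_angle_deg(1.48, 1.46):.1f} degrees") # ~14 degrees
```

Even this small index difference is enough to trap rays arriving within about a 14-degree cone, which is why such modest core/cladding contrasts suffice in practice.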
Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters. Being able to join optical fibers with low loss is important in fiber-optic communication; this is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection, a fusion splice is common. In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors. The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics.
The term was coined by Indian physicist Narinder Singh Kapany, who is widely acknowledged as the father of fiber optics. Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London 12 years later. Tyndall wrote about the property of total internal reflection in an introductory book about the nature of light in 1870: "When the light passes from air into water, the refracted ray is bent towards the perpendicular... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be reflected at the surface... The angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, while for diamond it is 23°42′."
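Tyndall's limiting angle is what is now called the critical angle, given by sin θc = 1/n at a boundary with air. A quick check with modern refractive indices (assumed round values, which differ slightly from the figures Tyndall had available):

```python
import math

def critical_angle_deg(n: float) -> float:
    """Critical angle for total internal reflection at a medium-air boundary."""
    return math.degrees(math.asin(1.0 / n))

# Representative refractive indices (assumed values):
for name, n in [("water", 1.333), ("flint glass", 1.60), ("diamond", 2.417)]:
    print(f"{name}: {critical_angle_deg(n):.1f} degrees")
# water: 48.6, flint glass: 38.7, diamond: 24.4
```

The water and flint-glass results agree with Tyndall's 48°27′ and 38°41′ to within a fraction of a degree; the diamond figure differs slightly because his value for the refractive index of diamond was higher than the modern one.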
In the late 19th and early 20th centuries, light was guided through bent glass rods to illuminate body cavities. Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. In the 1930s, Heinrich Lamm showed that one could transmit images through a bundle of unclad optical fibers and used it for internal medical examinations, but his work was largely forgotten. In 1953, Dutch scientist Bram van Heel first demonstrated image transmission through bundles of optical fibers with a transparent cladding; that same year, Harold Hopkins and Narinder Singh Kapany at Imperial College in London succeeded in making image-transmitting bundles with over 10,000 fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers. Their article titled "A flexible fibrescope, using static scanning" was published in the journal Nature in 1954.
The first practical fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers. A variety of other image transmission applications soon followed. Kapany coined the term fiber optics, wrote a 1960 article in Scientific American that introduced the topic to a wide audience, and wrote the first book about the new field. The first working fiber-optic data transmission system was demonstrated by German physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, followed by the first patent application for this technology in 1966. NASA used fiber optics in the television cameras that were sent to the moon. At the time, the use in the cameras was classified confidential, and employees handling the cameras had to be supervised by someone with an appropriate security clearance. Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables were the first, in 1965, to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer, making fibers a practical communication medium.
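The significance of the 20 dB/km figure can be made concrete: attenuation in decibels converts to a power ratio as P/P0 = 10^(−dB/10). A minimal sketch of that arithmetic (the 0.2 dB/km figure for modern fiber below is an assumed typical value, not from the text):

```python
def remaining_fraction(attenuation_db_per_km: float, length_km: float) -> float:
    """Fraction of optical power remaining after propagating length_km."""
    return 10 ** (-attenuation_db_per_km * length_km / 10)

# At Kao and Hockham's 20 dB/km threshold, 1% of the light survives 1 km:
print(remaining_fraction(20, 1))     # 0.01
# Modern fiber (~0.2 dB/km, assumed typical) keeps 1% over 100 km:
print(remaining_fraction(0.2, 100))  # 0.01
```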
They proposed that the attenuation in fibers available at the time was caused by impurities that could be removed, rather than by fundamental physical effects such as scattering.
An atomic clock is a clock whose timekeeping element uses an electron transition frequency in the microwave, optical, or ultraviolet region of the electromagnetic spectrum of atoms as a frequency standard. Atomic clocks are the most accurate time and frequency standards known, and are used as primary standards for international time distribution services, to control the wave frequency of television broadcasts, and in global navigation satellite systems such as GPS. The principle of operation of an atomic clock is based on atomic physics. Early atomic clocks were based on masers at room temperature. Since 2004, more accurate atomic clocks first cool the atoms to near absolute zero by slowing them with lasers, then probe them in atomic fountains in a microwave-filled cavity. An example of this is the NIST-F1 atomic clock, one of the national primary time and frequency standards of the United States. The accuracy of an atomic clock depends on two factors. The first factor is the temperature of the sample atoms: colder atoms move much more slowly, allowing longer probe times.
The second factor is the frequency and intrinsic width of the electronic transition. Higher frequencies and narrower lines increase the precision. National standards agencies in many countries maintain a network of atomic clocks which are intercompared and kept synchronized to an accuracy of 10⁻⁹ seconds per day; these clocks collectively define International Atomic Time (TAI). For civil time, another time scale is disseminated, Coordinated Universal Time (UTC). UTC is derived from TAI, but adds leap seconds from UT1 to account for variations in the rotation of the Earth with respect to solar time. The idea of using atomic transitions to measure time was suggested by Lord Kelvin in 1879. Magnetic resonance, developed in the 1930s by Isidor Rabi, became the practical method for doing this. In 1945, Rabi first publicly suggested that atomic beam magnetic resonance might be used as the basis of a clock. The first atomic clock was an ammonia absorption line device at 23870.1 MHz built in 1949 at the U.S. National Bureau of Standards; it served to demonstrate the concept. The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by Louis Essen and Jack Parry in 1955 at the National Physical Laboratory in the UK. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time (ET). In 1967, this led the scientific community to redefine the second in terms of a specific atomic frequency. Equality of the ET second with the SI second has been verified to within 1 part in 10¹⁰; the SI second thus inherits the effect of decisions by the original designers of the ephemeris time scale in determining the length of the ET second. Since the beginning of development in the 1950s, atomic clocks have been based on the hyperfine transitions in hydrogen-1, caesium-133, and rubidium-87. The first commercial atomic clock was the Atomichron, manufactured by the National Company; more than 50 were sold between 1956 and 1960.
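The synchronization accuracy of about 10⁻⁹ seconds per day quoted earlier corresponds to a fractional frequency stability of roughly one part in 10¹⁴, as this small sketch shows:

```python
SECONDS_PER_DAY = 86400

def fractional_stability(error_seconds: float, interval_seconds: float) -> float:
    """Dimensionless frequency offset implied by a timing error over an interval."""
    return error_seconds / interval_seconds

# 10^-9 s accumulated over one day, as a fractional frequency offset:
print(fractional_stability(1e-9, SECONDS_PER_DAY))  # ~1.16e-14
```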
This bulky and expensive instrument was subsequently replaced by much smaller rack-mountable devices, such as the Hewlett-Packard model 5060 caesium frequency standard, released in 1964. In the late 1990s, four factors contributed to major advances in clocks: laser cooling and trapping of atoms; high-finesse Fabry–Pérot cavities for narrow laser line widths; precision laser spectroscopy; and convenient counting of optical frequencies using optical frequency combs. In August 2004, NIST scientists demonstrated a chip-scale atomic clock. According to the researchers, the clock was believed to be one-hundredth the size of any other, and it requires no more than 125 mW of power. This technology became available commercially in 2011. In April 2015, NASA announced that it planned to deploy a Deep Space Atomic Clock, a miniaturized, ultra-precise mercury-ion atomic clock, into outer space. Since 1967, the International System of Units has defined the second as the duration of 9,192,631,770 cycles of radiation corresponding to the transition between two energy levels of the ground state of the caesium-133 atom.
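Because the second is defined by a fixed cycle count, timekeeping with a caesium standard reduces to counting cycles of that transition. A trivial sketch of the arithmetic:

```python
CAESIUM_HZ = 9_192_631_770  # cycles per SI second, by definition since 1967

def elapsed_seconds(cycles_counted: int) -> float:
    """Convert a count of caesium hyperfine cycles to elapsed SI seconds."""
    return cycles_counted / CAESIUM_HZ

print(elapsed_seconds(9_192_631_770))       # 1.0
print(elapsed_seconds(9_192_631_770 * 60))  # 60.0
```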
In 1997, the International Committee for Weights and Measures added that the preceding definition refers to a caesium atom at rest at a temperature of absolute zero. This definition makes the caesium oscillator the primary standard for time and frequency measurements, called the caesium standard. The definitions of other physical units, e.g. the volt and the metre, rely on the definition of the second. The actual time-reference of an atomic clock consists of an electronic oscillator operating at microwave frequency. The oscillator is arranged so that its frequency-determining components include an element that can be controlled by a feedback signal. The feedback signal keeps the oscillator tuned in resonance with the frequency of the electronic transition of caesium or rubidium. The core of the atomic clock is a tunable microwave cavity containing a gas. In a hydrogen maser clock the gas emits microwaves on a hyperfine transition, the field in the cavity oscillates, and the cavity is tuned for maximum microwave amplitude.
Alternatively, in a caesium or rubidium clock, the beam or gas absorbs microwaves and the cavity contains an electronic amplifier to make it oscillate. For both types, the atoms in the gas are prepared in one hyperfine state before entering the cavity.
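The feedback arrangement described above can be caricatured as a servo loop that repeatedly measures the oscillator's detuning from the atomic line and corrects it. This is a toy simulation with a made-up gain and an idealized error signal, not a model of any real clock:

```python
def steer_oscillator(f_atom: float, f_start: float,
                     gain: float = 0.5, steps: int = 40) -> float:
    """Toy servo: repeatedly measure detuning from the atomic line and correct."""
    f = f_start
    for _ in range(steps):
        error = f_atom - f   # discriminator signal (idealized)
        f += gain * error    # feedback correction toward resonance
    return f

# Lock a detuned local oscillator to the caesium hyperfine frequency (Hz):
locked = steer_oscillator(9_192_631_770.0, 9_192_631_000.0)
print(abs(locked - 9_192_631_770.0) < 1e-3)  # True
```

With a gain below 1 the residual detuning shrinks geometrically each iteration, which is the essential behaviour the real feedback loop relies on.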
An astronomical interferometer is an array of separate telescopes, mirror segments, or radio telescope antennas that work together as a single telescope to provide higher-resolution images of astronomical objects such as stars and galaxies by means of interferometry. The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation between the component telescopes. The main drawback is that it does not collect as much light as a complete telescope of that size. Thus it is mainly useful for fine resolution of more luminous astronomical objects, such as close binary stars. Another drawback is that the maximum angular size of a detectable emission source is limited by the minimum gap between detectors in the collector array. Interferometry is most widely used in radio astronomy, in which signals from separate radio telescopes are combined. A mathematical signal processing technique called aperture synthesis is used to combine the separate signals to create high-resolution images. In Very Long Baseline Interferometry (VLBI), radio telescopes separated by thousands of kilometers are combined to form a radio interferometer with a resolution equal to that of a hypothetical single dish with an aperture thousands of kilometers in diameter.
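The resolution claim can be checked with the usual estimate θ ≈ λ/B, where λ is the observing wavelength and B the baseline. A sketch with illustrative VLBI numbers (the 1.3 mm wavelength and 10,000 km baseline are assumed, round figures, not from the text):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ~206265

def resolution_arcsec(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited resolution ~ lambda / baseline, in arcseconds."""
    return wavelength_m / baseline_m * ARCSEC_PER_RAD

# Millimeter-wave VLBI over an Earth-sized baseline (illustrative numbers):
print(resolution_arcsec(1.3e-3, 1.0e7) * 1e6)  # ~27 micro-arcseconds
```

The result, a few tens of micro-arcseconds, is consistent with the micro-arcsecond-scale radio resolutions mentioned in the text.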
At the shorter wavelengths used in infrared astronomy and optical astronomy it is more difficult to combine the light from separate telescopes, because the light must be kept coherent within a fraction of a wavelength over long optical paths, requiring very precise optics. Practical infrared and optical astronomical interferometers have only recently been developed, and are at the cutting edge of astronomical research. At optical wavelengths, aperture synthesis allows the atmospheric seeing resolution limit to be overcome, allowing the angular resolution to reach the diffraction limit of the optics. Astronomical interferometers can produce higher-resolution astronomical images than any other type of telescope. At radio wavelengths, image resolutions of a few micro-arcseconds have been obtained, and image resolutions of a fraction of a milliarcsecond have been achieved at visible and infrared wavelengths. One simple layout of an astronomical interferometer is a parabolic arrangement of mirror pieces, giving a reflecting telescope with a "sparse" or "dilute" aperture.
In fact the parabolic arrangement of the mirrors is not important, as long as the optical path lengths from the astronomical object to the beam combiner are the same as would be given by the complete mirror case. Instead, most existing arrays use a planar geometry, though Labeyrie's hypertelescope will use a spherical geometry. One of the first uses of optical interferometry was the Michelson stellar interferometer on the Mount Wilson Observatory's reflector telescope, used to measure the diameters of stars. The red giant star Betelgeuse was the first to have its diameter determined in this way, on December 13, 1920. In the 1940s radio interferometry was used to perform the first high-resolution radio astronomy observations. For the next three decades astronomical interferometry research was dominated by work at radio wavelengths, leading to the development of large instruments such as the Very Large Array and the Atacama Large Millimeter Array. Optical/infrared interferometry was extended to measurements using separated telescopes by Johnson and Townes in the infrared and by Labeyrie in the visible.
In the late 1970s improvements in computer processing allowed for the first "fringe-tracking" interferometers, which operate fast enough to follow the blurring effects of astronomical seeing, leading to the Mk I, II and III series of interferometers. Similar techniques have now been applied at other astronomical telescope arrays, including the Keck Interferometer and the Palomar Testbed Interferometer. In the 1980s the aperture synthesis interferometric imaging technique was extended to visible light and infrared astronomy by the Cavendish Astrophysics Group, providing the first high-resolution images of nearby stars. In 1995 this technique was demonstrated on an array of separate optical telescopes for the first time, allowing a further improvement in resolution and higher-resolution imaging of stellar surfaces. Software packages such as BSMEM or MIRA are used to convert the measured visibility amplitudes and closure phases into astronomical images. The same techniques have now been applied at a number of other astronomical telescope arrays, including the Navy Prototype Optical Interferometer, the Infrared Spatial Interferometer and the IOTA array.
A number of other interferometers have made closure phase measurements and are expected to produce their first images soon, including the VLTI, the CHARA array, and Le Coroller and Dejonghe's Hypertelescope prototype. If completed, the MRO Interferometer, with up to ten movable telescopes, will produce among the first high-fidelity images from a long-baseline interferometer; the Navy Optical Interferometer took the first step in this direction in 1996, achieving a 3-way synthesis of an image of Mizar. Astronomical interferometry is principally conducted using Michelson interferometers. The principal operational interferometric observatories which use this type of instrumentation include VLTI, NPOI, and CHARA. Current projects will use interferometers to search for extrasolar planets, either by astrometric measurements of the reciprocal motion of the star or through the use of nulling (as will be used by the Keck Interferometer).
In radio-frequency engineering, a transmission line is a specialized cable or other structure designed to conduct alternating current of radio frequency, that is, currents with a frequency high enough that their wave nature must be taken into account. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas, distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections, and high-speed computer data buses. This article covers two-conductor transmission line such as parallel line, coaxial cable and microstrip. Some sources refer to waveguide, dielectric waveguide, and optical fibre as transmission line; however, these lines require different analytical techniques and so are not covered by this article. Ordinary electrical cables suffice to carry low-frequency alternating current, such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they cannot be used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses.
Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross-sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line, coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength. At microwave frequencies and above, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves.
Some sources define waveguides as a type of transmission line. At higher frequencies, in the terahertz and visible ranges, waveguides in turn become lossy, and optical methods are used to guide electromagnetic waves. The theory of sound wave propagation is mathematically similar to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves. Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin, and Oliver Heaviside. In 1855 Lord Kelvin formulated a diffusion model of the current in a submarine cable; the model predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. In 1885 Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations. In many electric circuits, the length of the wires connecting the components can for the most part be ignored; that is, the voltage on the wire at a given time can be assumed to be the same at all points.
However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire. A common rule of thumb is that the cable or wire should be treated as a transmission line if the length is greater than 1/10 of the wavelength. At this length the phase delay and the interference of any reflections on the line become important and can lead to unpredictable behaviour in systems which have not been designed using transmission line theory. For the purposes of analysis, an electrical transmission line can be modelled as a two-port network. In the simplest case, the network is assumed to be linear and the two ports are assumed to be interchangeable. If the transmission line is uniform along its length, its behaviour is described by a single parameter called the characteristic impedance, symbol Z0.
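The λ/10 rule of thumb is easy to mechanize. In this sketch the velocity factor is an assumed typical value for solid-polyethylene coax, and the function name is ours:

```python
def needs_transmission_line(frequency_hz: float, length_m: float,
                            velocity_factor: float = 0.66) -> bool:
    """Apply the lambda/10 rule of thumb for transmission-line treatment.

    velocity_factor is the signal speed relative to c; 0.66 is an assumed
    typical value for solid-polyethylene coaxial cable.
    """
    c = 299_792_458.0
    wavelength = velocity_factor * c / frequency_hz
    return length_m > wavelength / 10

# A 2 m cable at 100 MHz: wavelength ~ 1.98 m, threshold ~ 0.2 m
print(needs_transmission_line(100e6, 2.0))  # True
# The same cable at 50 Hz mains frequency:
print(needs_transmission_line(50.0, 2.0))   # False
```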
This is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. When sending power down a transmission line, it is desirable that as much power as possible be absorbed by the load and as little as possible be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched. Some of the power fed into a transmission line is lost because of its resistance. This effect is called resistive loss. At high frequencies, another effect called dielectric loss becomes significant, adding to the losses caused by resistance.
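For a coaxial line, the characteristic impedance follows from the conductor geometry and the dielectric constant: Z0 ≈ (59.95/√εr)·ln(D/d) ohms for a lossless line, where D and d are the shield and core diameters. A sketch with dimensions roughly like RG-58 (the dimensions and εr are assumed, approximate values):

```python
import math

def coax_impedance(outer_d: float, inner_d: float, eps_r: float) -> float:
    """Characteristic impedance of an ideal (lossless) coaxial line, in ohms."""
    return 59.952 / math.sqrt(eps_r) * math.log(outer_d / inner_d)

# Dimensions roughly like RG-58 (assumed): 2.95 mm shield ID, 0.81 mm core,
# solid polyethylene dielectric (eps_r ~ 2.25):
print(coax_impedance(2.95, 0.81, 2.25))  # ~51.7 ohms
```

The result lands near the standard 50-ohm value quoted above, which is why cables with roughly these proportions are so common.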
Russia, officially the Russian Federation, is a transcontinental country in Eastern Europe and North Asia. At 17,125,200 square kilometres, Russia is by far the largest country in the world by area, covering more than one-eighth of the Earth's inhabited land area, and the ninth most populous, with about 146.77 million people as of 2019, including Crimea. About 77% of the population live in the European part of the country. Russia's capital, Moscow, is one of the largest cities in the world and the second largest city in Europe. Extending across the entirety of Northern Asia and much of Eastern Europe, Russia spans eleven time zones and incorporates a wide range of environments and landforms. From northwest to southeast, Russia shares land borders with Norway, Finland, Estonia, Latvia, Lithuania and Poland (both via Kaliningrad Oblast), Belarus, Ukraine, Georgia, Azerbaijan, Kazakhstan, China, Mongolia, and North Korea. It shares maritime borders with Japan by the Sea of Okhotsk and with the U.S. state of Alaska across the Bering Strait. Russia also recognises two more countries that border it, Abkhazia and South Ossetia, both of which are internationally recognized as parts of Georgia.
The East Slavs emerged as a recognizable group in Europe between the 3rd and 8th centuries AD. Founded and ruled by a Varangian warrior elite and their descendants, the medieval state of Rus' arose in the 9th century. In 988 it adopted Orthodox Christianity from the Byzantine Empire, beginning the synthesis of Byzantine and Slavic cultures that defined Russian culture for the next millennium. Rus' ultimately disintegrated into a number of smaller states; the Grand Duchy of Moscow gradually reunified the surrounding Russian principalities and achieved independence from the Golden Horde. By the 18th century, the nation had expanded through conquest and exploration to become the Russian Empire, the third largest empire in history, stretching from Poland in the west to Alaska in the east. Following the Russian Revolution, the Russian Soviet Federative Socialist Republic became the largest and leading constituent of the Union of Soviet Socialist Republics, the world's first constitutionally socialist state. The Soviet Union played a decisive role in the Allied victory in World War II and emerged as a recognized superpower and rival to the United States during the Cold War.
The Soviet era saw some of the most significant technological achievements of the 20th century, including the world's first human-made satellite and the launching of the first humans into space. By the end of 1990, the Soviet Union had the world's second largest economy, the largest standing military in the world, and the largest stockpile of weapons of mass destruction. Following the dissolution of the Soviet Union in 1991, twelve independent republics emerged from the USSR: Russia, Ukraine, Belarus, Kazakhstan, Uzbekistan, Armenia, Azerbaijan, Georgia, Kyrgyzstan, Moldova, Tajikistan and Turkmenistan, while the Baltic states of Estonia, Latvia and Lithuania regained independence. The Russian SFSR reconstituted itself as the Russian Federation, which is governed as a federal semi-presidential republic. Russia's economy ranks as the twelfth largest by nominal GDP and sixth largest by purchasing power parity in 2018. Russia's extensive mineral and energy resources are the largest such reserves in the world, making it one of the leading producers of oil and natural gas globally. The country is one of the five recognized nuclear-weapons states and possesses the largest stockpile of weapons of mass destruction.
Russia is a great power as well as a regional power, and has been characterised as a potential superpower. It is a permanent member of the United Nations Security Council and an active global partner of ASEAN, as well as a member of the Shanghai Cooperation Organisation, the G20, the Council of Europe, the Asia-Pacific Economic Cooperation, the Organization for Security and Co-operation in Europe, and the World Trade Organization. It is also the leading member of the Commonwealth of Independent States and the Collective Security Treaty Organization, and one of the five members of the Eurasian Economic Union, along with Armenia, Belarus, Kazakhstan and Kyrgyzstan. The name Russia is derived from Rus', a medieval state populated by the East Slavs. However, this proper name became prominent only later in history; the country was called by its inhabitants "Русская Земля" (Russkaya Zemlya), which can be translated as "Russian Land" or "Land of Rus'". In order to distinguish this state from other states derived from it, it is denoted as Kievan Rus' by modern historiography.
The name Rus' itself comes from the early medieval Rus' people, Swedish merchants and warriors who relocated from across the Baltic Sea and founded a state centered on Novgorod that later became Kievan Rus'. An old Latin version of the name Rus' was Ruthenia, applied to the western and southern regions of Rus' that were adjacent to Catholic Europe. The current name of the country, Россия (Rossiya), comes from the Byzantine Greek designation of the Rus', Ρωσσία Rossía, spelled Ρωσία in Modern Greek. The standard way to refer to citizens of Russia is rossiyane in Russian. There are two Russian words which are commonly translated into English as "Russians": russkiye, which most often refers to ethnic Russians, and rossiyane, which denotes citizens of Russia regardless of ethnicity.
The Submillimeter Array (SMA) consists of eight 6-meter-diameter radio telescopes arranged as an interferometer for submillimeter-wavelength observations. It is the first purpose-built submillimeter interferometer, constructed after successful interferometry experiments that used the pre-existing 15-meter James Clerk Maxwell Telescope and 10.4-meter Caltech Submillimeter Observatory as an interferometer. All three of these observatories are located at Mauna Kea Observatory in Hawaii and can be operated together as a ten-element interferometer in the 230 and 345 GHz bands. The baseline lengths presently in use range from 16 to 508 meters, with baselines of up to 783 meters available for eSMA operations. The radio frequencies accessible to this telescope range from 180 to 418 gigahertz, which includes rotational transitions of dozens of molecular species as well as continuum emission from interstellar dust grains. Although the array is capable of operating both day and night, most observations take place at night, when the atmospheric phase stability is best.
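The quoted tuning range is easy to restate in wavelength terms via λ = c/f, which shows why this counts as a submillimeter instrument:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Wavelength in millimeters for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e3

# The SMA's 180-418 GHz tuning range, expressed as wavelengths:
print(f"{wavelength_mm(180):.2f} mm to {wavelength_mm(418):.3f} mm")
# 1.67 mm to 0.717 mm
```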
The SMA is jointly operated by the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics. The SMA is a multi-purpose instrument, and it excels at observations of dust and gas with temperatures only a few tens of kelvins above absolute zero. Objects with such temperatures emit the bulk of their radiation at wavelengths between a few hundred micrometers and a few millimeters, the wavelength range in which the SMA can observe. Observed classes of objects include star-forming molecular clouds in our own and other galaxies, highly redshifted galaxies, evolved stars, and the Galactic Center. Bodies in the Solar System, such as planets, asteroids and moons, are also observed. The SMA was the first radio telescope to resolve Pluto and Charon as separate objects. The SMA is a part of the Event Horizon Telescope, which observes nearby supermassive black holes with an angular resolution comparable to the size of the object's event horizon.

See also: Atacama Large Millimeter Array, a similar instrument operating in Chile.

External links: Submillimeter Array website; Smithsonian Astrophysical Observatory website; Academia Sinica Institute of Astronomy and Astrophysics website.