In electronics and telecommunications, modulation is the process of varying one or more properties of a periodic waveform, called the carrier signal, with a modulating signal that contains information to be transmitted. Most radio systems in the 20th century used frequency modulation (FM) or amplitude modulation (AM) for radio broadcast. A modulator is a device that performs modulation. A demodulator is a device that performs demodulation, the inverse of modulation. A modem (from modulator-demodulator) can perform both operations. The aim of analog modulation is to transfer an analog baseband signal, for example an audio signal or TV signal, over an analog bandpass channel at a different frequency, for example over a limited radio frequency band or a cable TV network channel. The aim of digital modulation is to transfer a digital bit stream over an analog communication channel, for example over the public switched telephone network or over a limited radio frequency band. Analog and digital modulation facilitate frequency-division multiplexing, where several low-pass information signals are transferred simultaneously over the same shared physical medium, using separate passband channels.
The aim of digital baseband modulation methods, also known as line coding, is to transfer a digital bit stream over a baseband channel, typically a non-filtered copper wire such as a serial bus or a wired local area network. The aim of pulse modulation methods is to transfer a narrowband analog signal, for example a phone call, over a wideband baseband channel or, in some of the schemes, as a bit stream over another digital transmission system. In music synthesizers, modulation may be used to synthesize waveforms with an extensive overtone spectrum using a small number of oscillators. In this case, the carrier frequency is typically of the same order as, or much lower than, the frequency of the modulating waveform. In analog modulation, the modulation is applied continuously in response to the analog information signal. Common analog modulation techniques include:
- Amplitude modulation
  - Double-sideband modulation
    - Double-sideband modulation with carrier
    - Double-sideband suppressed-carrier transmission
    - Double-sideband reduced-carrier transmission
  - Single-sideband modulation
    - Single-sideband modulation with carrier
    - Single-sideband suppressed-carrier modulation
  - Vestigial sideband modulation
  - Quadrature amplitude modulation
- Angle modulation (constant envelope)
  - Frequency modulation
  - Phase modulation
- Transpositional modulation, in which the waveform inflection is modified, resulting in a signal where each quarter cycle is transposed in the modulation process.
TM is a pseudo-analog modulation in which the AM carrier additionally carries a phase variable. Digital modulation methods can be considered as digital-to-analog conversion, and the corresponding demodulation or detection as analog-to-digital conversion. The changes in the carrier signal are chosen from a finite number of M alternative symbols (the modulation alphabet). A simple example: a telephone line is designed for transferring audible sounds, not digital bits. Computers may nevertheless communicate over a telephone line by means of modems, which represent the digital bits by tones, called symbols. If there are four alternative symbols, the first symbol may represent the bit sequence 00, the second 01, the third 10 and the fourth 11. If the modem plays a melody consisting of 1000 tones per second, the symbol rate is 1000 symbols/second, or 1000 baud. Since each tone represents a message consisting of two digital bits in this example, the bit rate is twice the symbol rate, i.e. 2000 bits per second. This is similar to the technique used by dial-up modems as opposed to DSL modems.
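The symbol-rate arithmetic above can be sketched in a few lines; the function name is illustrative:

```python
import math

def bit_rate(symbol_rate_baud, num_symbols):
    """Bit rate = symbol rate times bits per symbol (log2 of the alphabet size)."""
    bits_per_symbol = math.log2(num_symbols)
    return symbol_rate_baud * bits_per_symbol

# The modem example from the text: 4 tones at 1000 symbols/second
print(bit_rate(1000, 4))
```

With four symbols each tone carries log2(4) = 2 bits, so 1000 baud yields 2000 bits per second, as stated above.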
According to one definition of digital signal, the modulated signal is a digital signal. According to another definition, the modulation is a form of digital-to-analog conversion. Most textbooks would consider digital modulation schemes as a form of digital transmission, synonymous to data transmission. The most fundamental digital modulation techniques are based on keying:
- PSK (phase-shift keying): a finite number of phases are used.
- FSK (frequency-shift keying): a finite number of frequencies are used.
- ASK (amplitude-shift keying): a finite number of amplitudes are used.
- QAM (quadrature amplitude modulation): a finite number of at least two phases and at least two amplitudes are used.
In QAM, an in-phase signal and a quadrature phase signal are amplitude modulated with a finite number of amplitudes and summed. It can be seen as a two-channel system, each channel using ASK. The resulting signal is equivalent to a combination of PSK and ASK. In all of the above methods, each of these phases, frequencies or amplitudes is assigned a unique pattern of binary bits.
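As a toy illustration of keying, the sketch below maps bit pairs onto the four phases of QPSK, a PSK scheme with M = 4. The particular bit-to-phase mapping is an assumption chosen for illustration; real modems typically use Gray-coded mappings:

```python
import cmath
import math

# Hypothetical bit-pair to phase index mapping (not Gray-coded; illustrative only)
BIT_MAP = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def qpsk_symbol(bits):
    """Return the unit-amplitude complex symbol for a pair of bits."""
    k = BIT_MAP[tuple(bits)]
    angle = math.pi / 4 + k * math.pi / 2  # phases at 45, 135, 225, 315 degrees
    return cmath.exp(1j * angle)

s = qpsk_symbol([0, 1])
assert abs(abs(s) - 1.0) < 1e-12  # constant envelope: every symbol lies on the unit circle
```

Because only the phase changes, the envelope is constant, which is the defining property of the angle-modulation family noted earlier.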
Extremely high frequency
Extremely high frequency (EHF) is the International Telecommunication Union designation for the band of radio frequencies in the electromagnetic spectrum from 30 to 300 gigahertz. It lies between the super high frequency band and the far infrared band, the lower part of which is referred to as the terahertz gap. Radio waves in this band have wavelengths from ten to one millimetre, so it is also called the millimetre band, and radiation in this band is called millimetre waves, sometimes abbreviated MMW or mmW. Millimetre-length electromagnetic waves were first investigated in the 1890s by Indian scientist Jagadish Chandra Bose. Compared to lower bands, radio waves in this band have high atmospheric attenuation: they are absorbed by the gases in the atmosphere. Therefore, they have a short range and can only be used for terrestrial communication over distances of about a kilometre. Absorption by humidity in the atmosphere is significant except in desert environments, and attenuation by rain is a serious problem even over short distances; however, the short propagation range allows smaller frequency reuse distances than lower frequencies.
The short wavelength allows modest-size antennas to have a small beam width, further increasing frequency reuse potential. Millimeter waves propagate by line-of-sight paths; they are not reflected by the ionosphere, nor do they travel along the Earth as ground waves as lower-frequency radio waves do. At typical power densities they are blocked by building walls and suffer significant attenuation passing through foliage. Absorption by atmospheric gases is a significant factor throughout the band and increases with frequency. However, it is maximum at a few specific absorption lines, mainly those of oxygen at 60 GHz and water vapor at 24 GHz and 184 GHz. At frequencies in the "windows" between these absorption peaks, millimeter waves have much less atmospheric attenuation and greater range, so many applications use these frequencies. Millimeter wavelengths are the same order of size as raindrops, so precipitation causes additional attenuation due to scattering as well as absorption; the high free-space loss and atmospheric absorption limit useful propagation to a few kilometers.
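The free-space loss mentioned above follows the standard free-space path-loss formula; a minimal sketch (the function name is illustrative):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over 1 km at 60 GHz versus 2.4 GHz (atmospheric absorption comes on top)
print(round(fspl_db(1_000, 60e9), 1), round(fspl_db(1_000, 2.4e9), 1))
```

Because loss grows as 20*log10(f), a 60 GHz link loses roughly 28 dB more than a 2.4 GHz link over the same distance, before any gas or rain absorption is counted.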
Thus, they are useful for densely packed communications networks such as personal area networks that improve spectrum utilization through frequency reuse. Millimeter waves show "optical" propagation characteristics and can be reflected and focused by small metal surfaces and dielectric lenses around 5 to 30 cm in diameter; because their wavelengths are much smaller than the equipment that manipulates them, the techniques of geometric optics can be used. Diffraction is less than at lower frequencies. At millimeter wavelengths, surfaces appear rougher, so diffuse reflection increases. Multipath propagation, particularly reflection from indoor walls and surfaces, causes serious fading. Doppler shift of frequency can be significant even at pedestrian speeds. In portable devices, shadowing due to the human body is a problem. Since the waves penetrate clothing and their small wavelength allows them to reflect from small metal objects, they are used in millimeter wave scanners for airport security scanning. This band is also used in radio astronomy and remote sensing.
Ground-based radio astronomy is limited to high-altitude sites such as Kitt Peak and the Atacama Large Millimeter Array due to atmospheric absorption. Satellite-based remote sensing near 60 GHz can determine temperature in the upper atmosphere by measuring radiation emitted from oxygen molecules, which is a function of temperature and pressure. The ITU non-exclusive passive frequency allocation at 57–59.3 GHz is used for atmospheric monitoring in meteorological and climate sensing applications and is important for these purposes due to the properties of oxygen absorption and emission in Earth's atmosphere. Operational U.S. satellite sensors such as the Advanced Microwave Sounding Unit on one NASA satellite and four NOAA satellites, and the Special Sensor Microwave/Imager on Department of Defense satellite F-16, make use of this frequency range. In the United States, the band 36.0–40.0 GHz is used for licensed high-speed microwave data links, and the 60 GHz band can be used for unlicensed short-range data links with data throughputs up to 2.5 Gbit/s.
These 60 GHz links are typically used in flat terrain. The 71–76, 81–86 and 92–95 GHz bands are also used for point-to-point high-bandwidth communication links; these higher frequencies do not suffer from oxygen absorption, but require a transmitting license in the US from the Federal Communications Commission (FCC). There are plans for 10 Gbit/s links using these frequencies as well. In the case of the 92–95 GHz band, a small 100 MHz range has been reserved for space-borne radios, limiting this reserved range to a transmission rate of under a few gigabits per second. The band is essentially undeveloped and available for use in a broad range of new products and services, including high-speed, point-to-point wireless local area networks and broadband Internet access. WirelessHD is another recent technology that operates near the 60 GHz range. Highly directional, "pencil-beam" signal characteristics permit different systems to operate close to one another without causing interference. Potential applications include radar systems with very high resolution. The Wi-Fi standard IEEE 802.11ad operates in the 60 GHz spectrum to achieve data transfer rates as high as 7 Gbit/s.
Uses of the millimeter wave bands include point-to-point communications, intersatellite links, and point-to-multipoint communications. There are tentative plans to use millimeter waves in future 5G mobile phones. In addition, use of millimeter wave
RMIT University is an Australian public research university located in Melbourne, Victoria. Founded by Francis Ormond in 1887, RMIT began as a night school offering classes in art and technology in response to the industrial revolution in Australia. It was a private college for more than a hundred years before merging with the Phillip Institute of Technology to become a public university in 1992. It has an enrolment of around 87,000 higher and vocational education students, making it the largest dual-sector education provider in Australia. With an annual revenue of around A$1.3 billion, it is also one of the wealthiest universities in Australia. It is rated a five-star university by Quacquarelli Symonds and is ranked 17th in the world for art and design subjects in the QS World University Rankings, making it the top art and design university in Australia. Its main campus is situated on the northern edge of the historic Hoddle Grid in the city centre of Melbourne. It has two satellite campuses in the northern suburbs of Brunswick and Bundoora, and a training site situated on the Williams base of the Royal Australian Air Force in the western suburb of Point Cook.
Beyond Melbourne, it has a research site near the Grampians National Park in the rural city of Hamilton. Outside Australia, it has a presence in Asia and Europe. In Asia, it has two branch campuses in the Vietnamese cities of Hanoi and Ho Chi Minh City, as well as teaching partnerships in China, Hong Kong, Indonesia and Sri Lanka. In Europe, it has a coordinating centre in the Catalonian city of Barcelona. The antecedent of RMIT, the Working Men's College of Melbourne, was founded by the Scottish-born grazier and politician The Hon. Francis Ormond in the 1880s. Planning began in 1881, with Ormond basing his model for the college on the Birkbeck Literary and Scientific Institution, the Brighton College of Art, the Royal College of Art, and the Working Men's College of London. Ormond donated the sum of £5000 toward the foundation of the college; he was supported in the Victorian Parliament by Charles Pearson and in the Melbourne Trades Hall by William Murphy. The workers' unions of Melbourne rallied their members to match Ormond's donation.
The site for the college, on the corner of Bowen Street and La Trobe Street, opposite the Melbourne Public Library, was donated by the Victorian Government. The Working Men's College of Melbourne opened on 4 June 1887 with a gala ceremony at the Melbourne Town Hall, becoming the fifth tertiary education provider in Victoria; it took 320 enrolments on its opening night. It opened as a night school for instruction in "art and technology", in the words of its founder, "especially to working men". Ormond was a firm believer in the transformative power of education and believed the college would be of "great importance and value" to the industrialisation of Melbourne during the late 19th century. In 1904, it was incorporated under the Companies Act as a private college. Between the turn of the 20th century and the 1930s, it expanded over the neighbouring Old Melbourne Gaol and constructed buildings for new art and radio schools. It also made its first contribution to Australia's war effort through the training of returned military personnel from World War I.
Following a petition by students, it changed its name to the Melbourne Technical College in 1934. The expanded college made a greater contribution to Australia's effort during World War II by training a sixth of the country's military personnel, including the majority of its Royal Australian Air Force communication officers. It also trained 2000 civilians in munitions manufacturing and was commissioned by the Australian Government to manufacture military aircraft parts, including the majority of the parts for the Beaufort Bomber. Following World War II, in 1954 it became the first Australian tertiary education provider to be awarded royal patronage, for its service to the Commonwealth in the area of education and for its contribution to the war effort. It became the only higher education institution in Australia with the right to use the prefix "Royal", along with the use of the Australian monarchy's regalia. Its name was changed to the Royal Melbourne Institute of Technology in 1960. During the mid-20th century, it was restructured as a provider of general higher and vocational education and pioneered dual-sector education in Australia.
It began an engagement with Southeast Asia during this time. In 1979, the neighbouring Emily McPherson College of Domestic Economy was merged with RMIT. After merging with the Phillip Institute of Technology in 1992, it became a public university by an act of the Victorian Government, the Royal Melbourne Institute of Technology Act 1992. During the 1990s, the university underwent a rapid expansion and amalgamated with a number of nearby colleges and institutes. The Melbourne College of Decoration and Design joined RMIT in 1993 to create a new dedicated vocational design school, followed by the Melbourne College of Printing and Graphic Arts in 1995. That same year, it opened its first satellite campus in Bundoora, in the northern Melbourne metropolitan area. In 1999, it acquired the Melbourne Institute of Textiles campus in Brunswick, in the inner-northern Melbourne metropolitan area, for its vocational design schools. At the turn of the 21st century, it was invited by the Vietnamese Government
SIAE MICROELETTRONICA is an Italian multinational corporation and a global supplier of telecom network equipment. It provides wireless backhaul and fronthaul solutions comprising microwave and millimeter wave radio systems, along with fiber-optic transmission systems provided by its subsidiary SM Optics. The company's products are deployed in more than 90 countries worldwide; the company is headquartered in Milan, with 26 regional offices around the globe. Edoardo Mascetti, after graduating in 1949 in Electrical Engineering at the Polytechnic University of Milan and working as an electronic designer for Siemens, founded his own company and named it SIAE, an acronym for Società Italiana Apparecchiature Elettroniche. The company manufactured measurement systems such as electro-mechanical testers, analog oscilloscopes, telephone system analyzers and signal generators. SIAE's sales volume in 1955 was 6,224,000 Italian lire and doubled by the end of 1957. A 431A-model oscilloscope by SIAE was part of the synthesizer in the Studio di fonologia musicale di Radio Milano until its decommissioning in 1983 and is on permanent display with the original studio equipment at the Musical Instrument Museum hosted at the Castello Sforzesco, Milan.
A few years after founding SIAE, in 1958 Edoardo Mascetti co-founded Microelettronica S.p.A., a company located in a basement in Milan whose business was the design of telecommunication equipment for radio and landline systems. In 1963, the two complementary companies were merged into SIAE MICROELETTRONICA S.p.A. and the headquarters was moved to the nearby town of Cologno Monzese, where a larger area was available to accommodate the new offices and manufacturing plant. The new company counted fewer than 50 employees and focused its business on telecommunication systems, which were growing thanks to the widespread diffusion of telephone systems in Italy. Analog multiplexing systems for telephone providers constituted the company's principal product. By the mid-'60s, the company began promoting its products by advertising in technical journals as a natural consequence of a growing business, thanks to its first large-scale commercialized radio transceivers: the 3-channel 3-B3 and the RT450, capable of aggregating 48 channels in the UHF band.
The RT450 equipment was certified by Telettra under the commercial name H450 as a fallback link for its high-capacity solutions. Power-line communication systems were also manufactured by the company in those years, along with the first fixed and mobile communication terminals in the VHF band for vehicular communications and anti-burglar alarm systems. By 1973, a whole new internal division had been created for the design of television broadcasting equipment, whose main customer was the national television company RAI. The first television products were based on thermionic tubes, though improvements in solid-state technologies soon replaced vacuum tubes. Similar improvements in printed-board manufacturing made microstrip circuits a viable solution for increasingly high microwave frequencies, and in 1978 the RT12 radio equipment boasted the first direct-conversion 2.3 GHz synthesized modulator and could aggregate 120 telephone channels. In these years, the company manufactured the historic link connecting the Milan and Rome branches of the Corriere della Sera newspaper.
The employees numbered about a hundred, though the company still remained a family-run business led by the founder and a tight board of managers through the '70s. Computer-aided design of electronic circuits was adopted and exploited to improve yield and reduce the design time for the critical high-frequency sections. The early '80s witnessed two intertwined developments which boosted the company's activities: the digital revolution reached commercial radio links, and the RT20 thus leveraged 4-QAM modulation to provide low-capacity links, while increased globalization opened up international markets and the company's business expanded to Norway, Great Britain and the rest of Europe. A quality control system was soon implemented to certify the improved production standards. In partnership with Telettra, the company developed RIAM, the multichannel radio network covering the national territory, for the national electric company, Enel. The radio equipment was governed by a microprocessor, in the wake of the widespread usage of these new components, which offered a huge range of new possibilities for coordinating radio components compared to traditional dedicated circuits.
With the increased demand for traffic, higher frequencies were needed, and in the second half of the '80s the company commercialized its 18 GHz radio transceiver with a capacity up to 2 Mbit/s, based on Enel's specifications. A 13 GHz equipment with 4 Mbit/s capacity was instead first provided to the Mercury operator in the UK. Thin-film manufacturing techniques for printed circuit boards were adopted during the '80s by the company for its microwave products beyond 10 GHz, with dedicated equipment and production lines, and were soon upgraded to chip-and-wire technologies in clean rooms. In 1986 the company introduced the "split-mount" configuration, where an indoor unit (IDU) is connected to an outdoor unit (ODU); the IDU provides the network interfaces and carries out the baseband tasks while communicating with the ODU by means of an intermediate frequency.
Digital signal processing
Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control systems, and biomedical engineering, among others. DSP can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification and can be implemented in the time and spatio-temporal domains. The application of digital computation to signal processing allows for many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.
DSP is applicable to both streaming data and static (stored) data. To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter (ADC). Sampling is carried out in two stages: discretization and quantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means that each amplitude measurement is approximated by a value from a finite set; rounding real numbers to integers is an example. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often significantly higher than this minimum. Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies, "created" by the abstract process of sampling. Numerical methods, however, require a quantized signal, such as those produced by an ADC. The processed result might be a set of statistics, but typically it is another quantized signal, converted back to analog form by a digital-to-analog converter (DAC).
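The discretization and quantization steps can be sketched as follows; the signal, sampling rate and 8-bit depth are arbitrary choices made for illustration:

```python
import math

fs = 50          # sampling frequency in Hz, well above the 10 Hz Nyquist rate
f = 5            # signal frequency in Hz
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]  # one second of signal

def quantize(x, bits=8):
    """Map x in [-1, 1] to the nearest of 2**bits integer levels (rounding step)."""
    levels = 2 ** bits - 1
    return round((x + 1) / 2 * levels)

digital = [quantize(s) for s in samples]
assert all(0 <= v <= 255 for v in digital)  # every sample now lives in a finite set
```

Discretization is the list comprehension over sample indices n; quantization is the rounding to one of 256 levels, which is exactly the "rounding real numbers to integers" example in the text.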
In DSP, engineers usually study digital signals in one of the following domains: time domain, spatial domain, frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed assumption as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain representation. The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters. Linear filters satisfy the superposition principle, i.e. if an input is a weighted linear combination of different signals, the output is a similarly weighted linear combination of the corresponding output signals.
A causal filter uses only current and previous samples of the input or output signals, never future ones. A non-causal filter can be changed into a causal filter by adding a delay to it. A time-invariant filter has constant properties over time. A stable filter produces an output that converges to a constant value with time, or remains bounded within a finite interval. An unstable filter can produce an output that grows without bounds, even with bounded or zero input. A finite impulse response (FIR) filter uses only the input signals, while an infinite impulse response (IIR) filter uses both the input signal and previous samples of the output signal. FIR filters are always stable, while IIR filters may be unstable. A filter can be represented by a block diagram, which can then be used to derive a sample processing algorithm to implement the filter with hardware instructions. A filter may also be described as a difference equation, a collection of zeros and poles, or an impulse response or step response. The output of a linear digital filter to any given input may be calculated by convolving the input signal with the impulse response.
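The convolution just described can be sketched as a direct-form causal FIR filter; the 3-tap moving-average impulse response is a hypothetical choice for illustration:

```python
def fir_filter(x, h):
    """Convolve input x with impulse response h (causal, zero initial state)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:          # causal: only current and past input samples
                acc += hk * x[n - k]
        y.append(acc)
    return y

h = [1 / 3, 1 / 3, 1 / 3]           # moving-average impulse response (3 taps)
y = fir_filter([3.0, 0.0, 3.0, 0.0, 3.0], h)
print(y)  # the alternating input is smoothed toward its local average
```

Because the output depends only on input samples, this filter is FIR and therefore unconditionally stable, matching the claim in the text.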
Signals are converted from the time or space domain to the frequency domain through use of the Fourier transform. The Fourier transform converts the time or space information to a magnitude and phase component of each frequency. With some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, the Fourier transform is often converted to the power spectrum, which is the magnitude of each frequency component squared. The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum or spectral analysis. Filtering, particularly in non-realtime work, can also be achieved in the frequency domain, applying the filter and then converting back to the time domain; this can be an efficient implementation and can g
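A minimal sketch of the frequency-domain view described above, using a naive discrete Fourier transform and the power spectrum; this O(N²) form is for clarity only, real implementations use the FFT:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# 8 samples of a cosine completing exactly 2 cycles: the energy lands in bin 2
x = [math.cos(2 * math.pi * 2 * n / 8) for n in range(8)]
power = [abs(X) ** 2 for X in dft(x)]   # phase discarded, as the text describes
assert abs(power[2] - 16.0) < 1e-6      # the spectrum reveals which frequency is present
```

Inspecting `power` tells the engineer which frequencies are present and which are missing, which is exactly the spectral-analysis use case above.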
In fiber-optic communications, wavelength-division multiplexing (WDM) is a technology which multiplexes a number of optical carrier signals onto a single optical fiber by using different wavelengths of laser light. This technique enables bidirectional communications over one strand of fiber, as well as multiplication of capacity. The term wavelength-division multiplexing is commonly applied to an optical carrier, which is typically described by its wavelength, whereas frequency-division multiplexing applies to a radio carrier, more often described by frequency. This is purely conventional, because wavelength and frequency communicate the same information. A WDM system uses a multiplexer at the transmitter to join the several signals together and a demultiplexer at the receiver to split them apart. With the right type of fiber, it is possible to have a device that does both and can function as an optical add-drop multiplexer. The optical filtering devices used have conventionally been etalons. As there are three different WDM types, one of which is called "WDM", the notation "xWDM" is used when discussing the technology as such.
The concept was first published in 1978, and by 1980 WDM systems were being realized in the laboratory. The first WDM systems combined only two signals. Modern systems can handle 160 signals and can thus expand a basic 100 Gbit/s system over a single fiber pair to over 16 Tbit/s. Systems of 320 channels also exist. WDM systems are popular with telecommunications companies because they allow them to expand the capacity of the network without laying more fiber. By using WDM and optical amplifiers, they can accommodate several generations of technology development in their optical infrastructure without having to overhaul the backbone network. Capacity of a given link can be expanded simply by upgrading the multiplexers and demultiplexers at each end. This is often done by use of optical-to-electrical-to-optical translation at the edge of the transport network, thus permitting interoperation with existing equipment with optical interfaces. Most WDM systems operate on single-mode fiber optical cables. Certain forms of WDM can also be used in multi-mode fiber cables, which have core diameters of 50 or 62.5 µm.
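The capacity arithmetic above is simply channel count times per-channel rate:

```python
def total_capacity_gbps(channels, per_channel_gbps):
    """Aggregate WDM link capacity: channel count times per-channel rate."""
    return channels * per_channel_gbps

# The figures from the text: 160 signals at 100 Gbit/s each over one fiber pair
assert total_capacity_gbps(160, 100) == 16_000  # i.e. 16 Tbit/s
```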
Early WDM systems were complicated to run. However, recent standardization and better understanding of the dynamics of WDM systems have made WDM less expensive to deploy. Optical receivers, in contrast to laser sources, tend to be wideband devices. Therefore, the demultiplexer must provide the wavelength selectivity of the receiver in the WDM system. WDM systems are divided into three different wavelength patterns: normal, coarse, and dense. Normal WDM uses the two normal wavelengths 1310 and 1550 nm on one fiber. Coarse WDM provides up to 16 channels across multiple transmission windows of silica fibers. Dense WDM (DWDM) uses the C-band transmission window but with denser channel spacing. Channel plans vary, but a typical DWDM system would use 40 channels at 100 GHz spacing or 80 channels with 50 GHz spacing; some technologies are capable of 12.5 GHz spacing. New amplification options enable the extension of the usable wavelengths to the L-band, more or less doubling these numbers. Coarse wavelength division multiplexing (CWDM), in contrast to DWDM, uses increased channel spacing to allow less-sophisticated and thus cheaper transceiver designs.
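A DWDM channel plan like the one described can be sketched on the ITU grid; the 193.1 THz anchor is the standard ITU-T G.694.1 reference frequency, which is an added detail not stated in the text above:

```python
C = 299_792_458.0  # speed of light, m/s

def dwdm_channels_thz(n, spacing_ghz=100.0, anchor_thz=193.1):
    """Channel center frequencies on the ITU grid, anchored at 193.1 THz."""
    return [anchor_thz + k * spacing_ghz / 1000.0 for k in range(n)]

freqs = dwdm_channels_thz(40)                     # 40 channels at 100 GHz spacing
wavelengths_nm = [C / (f * 1e12) * 1e9 for f in freqs]
assert abs(wavelengths_nm[0] - 1552.52) < 0.01    # 193.1 THz is about 1552.52 nm, in the C-band
```

Halving the spacing argument to 50 GHz doubles the channel count over the same band, which is the 40-versus-80-channel trade-off mentioned above.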
To provide 16 channels on a single fiber, CWDM uses the entire frequency band spanning the second and third transmission windows, including the critical frequencies where OH scattering may occur. OH-free silica fibers are recommended if the wavelengths between the second and third transmission windows are to be used. Avoiding this region, the channels 47, 49, 51, 53, 55, 57, 59 and 61 remain, and these are the most commonly used. With OS2 fibers the water peak problem is overcome, and all possible 18 channels can be used. WDM, CWDM and DWDM are based on the same concept of using multiple wavelengths of light on a single fiber but differ in the spacing of the wavelengths, the number of channels, and the ability to amplify the multiplexed signals in the optical space. EDFAs provide efficient wideband amplification for the C-band, and Raman amplification adds a mechanism for amplification in the L-band. For CWDM, wideband optical amplification is not available, limiting the optical spans to several tens of kilometres. The term coarse wavelength division multiplexing was initially generic and described a number of different channel configurations.
In general, the choice of channel spacings and frequencies in these configurations precluded the use of erbium-doped fiber amplifiers (EDFAs). Prior to the recent ITU standardization of the term, one common definition for CWDM was two or more signals multiplexed onto a single fiber, with one signal in the 1550 nm band and the other in the 1310 nm band. In 2002, the ITU standardized a channel spacing grid for CWDM using the wavelengths from 1270 nm through 1610 nm with a channel spacing of 20 nm. ITU G.694.2 was revised in 2003 to shift the channel centers by 1 nm, so strictly speaking the center wavelengths are 1271 to 1611 nm. Many CWDM wavelengths below 1470 nm are considered unusable on older G.652 specification fibers, due to the increased attenuation in the 1270–1470 nm bands. Newer fibers which conform to the G.652.C and G.652.D standards, such as Corning SMF-28e and Samsung Widepass, near
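The standardized CWDM grid described above can be generated directly from its parameters (18 channels, 20 nm spacing, centers from 1271 nm to 1611 nm):

```python
# ITU G.694.2 CWDM grid as revised in 2003: 20 nm spacing, 1271 nm through 1611 nm
cwdm_centers_nm = [1271 + 20 * k for k in range(18)]
assert cwdm_centers_nm[0] == 1271
assert cwdm_centers_nm[-1] == 1611
```

The channel numbers quoted earlier (47, 49, ..., 61) correspond to the centers 1471, 1491, ..., 1611 nm in this list, the region above the water-attenuation peak.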
Angular momentum of light
The angular momentum of light is a vector quantity that expresses the amount of dynamical rotation present in the electromagnetic field of the light. While traveling in a straight line, a beam of light can be rotating around its own axis. This rotation, while not visible to the naked eye, can be revealed by the interaction of the light beam with matter. There are two distinct forms of rotation of a light beam, one involving its polarization and the other its wavefront shape. These two forms of rotation are therefore associated with two distinct forms of angular momentum, named light spin angular momentum (SAM) and light orbital angular momentum (OAM). The total angular momentum of light and matter is conserved in time. It is well known that light, or more generally an electromagnetic wave, carries not only energy but also momentum, which is a characteristic property of all objects in translational motion. The existence of this momentum becomes apparent in the "radiation pressure" phenomenon, in which a light beam transfers its momentum to an absorbing or scattering object, generating a mechanical pressure on it in the process.
Less known is the fact that light may also carry angular momentum, which is a property of all objects in rotational motion. For example, a light beam can be rotating around its own axis. Again, the existence of this angular momentum can be made evident by transferring it to small absorbing or scattering particles, which are thus subject to an optical torque. For a light beam, one can distinguish two "forms of rotation", the first associated with the dynamical rotation of the electric and magnetic fields around the propagation direction, and the second with the dynamical rotation of light rays around the main beam axis. These two rotations are associated with the two forms of angular momentum, namely SAM and OAM. However, this distinction becomes blurred for focused or diverging beams, and in the general case only the total angular momentum of a light field can be defined. An important limiting case in which the distinction is instead clear and unambiguous is that of a "paraxial" light beam, that is, a well-collimated beam in which all light rays form only small angles with the beam axis.
For such a beam, SAM is related to the optical polarization, in particular to the so-called circular polarization. OAM is related to the spatial field distribution, in particular to the helical shape of the wavefront. In addition to these two terms, if the origin of coordinates is located outside the beam axis, there is a third angular momentum contribution, obtained as the cross-product of the beam position and its total momentum. This third term is also called "orbital", because it depends on the spatial distribution of the field. However, since its value depends on the choice of the origin, it is termed "external" orbital angular momentum, as opposed to the "internal" OAM appearing for helical beams. One commonly used expression for the total angular momentum of an electromagnetic field is the following one, in which there is no explicit distinction between the two forms of rotation:

J = ϵ₀ ∫ r × (E × B) d³r,

where E and B are the electric and magnetic fields, ϵ₀ is the vacuum permittivity, and we are using SI units.
However, another expression of the angular momentum, arising from Noether's theorem, is the following one, in which there are two separate terms that may be associated with SAM and OAM:

J = ϵ₀ ∫ (E × A) d³r + ϵ₀ Σ_{i=x,y,z} ∫ Eⁱ (r × ∇) Aⁱ d³r,

where A is the vector potential of the magnetic field, and the i-superscripted symbols denote the cartesian components of the corresponding vectors. These two expressions can be proved to be equivalent to each other for any electromagnetic field that vanishes fast enough outside a finite region of space. The two terms in the second expression, however, are physically ambiguous, as they are not gauge-invariant. A gauge-invariant version can be obtained by replacing the vector potential A and the electric field E with their "transverse" or radiative components A⊥ and E⊥, thus obtaining the following expression: J⊥ = ϵ₀ ∫ ( E⊥ × A
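For the paraxial beams discussed above, the SAM/OAM split takes a simple, widely quoted per-photon form. This limit is added here as standard background, not stated in the text: for a monochromatic beam of angular frequency ω with circular-polarization index σ = ±1 and helical wavefront index ℓ (an integer),

```latex
J_z = (\sigma + \ell)\,\hbar \quad \text{per photon},
\qquad\text{equivalently}\qquad
\frac{J_z}{W} = \frac{\sigma + \ell}{\omega},
```

where W is the beam energy; the σħ term is the spin (polarization) contribution and the ℓħ term is the internal orbital contribution carried by the helical wavefront.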