Reflectance of the surface of a material is its effectiveness in reflecting radiant energy. It is the fraction of incident electromagnetic power reflected at an interface; the reflectance spectrum, or spectral reflectance curve, is the plot of the reflectance as a function of wavelength. The hemispherical reflectance of a surface, denoted R, is defined as R = Φe^r / Φe^i, where Φe^r is the radiant flux reflected by that surface and Φe^i is the radiant flux it receives. The spectral hemispherical reflectance in frequency and the spectral hemispherical reflectance in wavelength, denoted Rν and Rλ, are defined as Rν = Φe,ν^r / Φe,ν^i and Rλ = Φe,λ^r / Φe,λ^i, where Φe,ν^r is the spectral radiant flux in frequency reflected by that surface. The directional reflectance of a surface, denoted RΩ, is defined as RΩ = Le,Ω^r / Le,Ω^i, where Le,Ω^r is the radiance reflected by that surface and Le,Ω^i is the radiance it receives. The spectral directional reflectance in frequency and the spectral directional reflectance in wavelength, denoted RΩ,ν and RΩ,λ, are defined as RΩ,ν = Le,Ω,ν^r / Le,Ω,ν^i and RΩ,λ = Le,Ω,λ^r / Le,Ω,λ^i, where Le,Ω,ν^r is the spectral radiance in frequency reflected by that surface.
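The ratio definitions above translate directly into code: at each wavelength sample, reflectance is simply reflected flux divided by incident flux. A minimal sketch in Python; the function name and the flux values are illustrative, not from the text:

```python
def spectral_reflectance(flux_reflected, flux_incident):
    """Spectral reflectance at each wavelength sample: the ratio of
    reflected to incident spectral radiant flux."""
    return [r / i for r, i in zip(flux_reflected, flux_incident)]

# Hypothetical fluxes (arbitrary units) at three wavelength samples
incident = [10.0, 20.0, 40.0]
reflected = [1.0, 8.0, 30.0]
R_lambda = spectral_reflectance(reflected, incident)
```

Plotting `R_lambda` against the corresponding wavelengths would give the spectral reflectance curve described above.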
For homogeneous and semi-infinite materials, reflectivity is the same as reflectance. Reflectivity is the square of the magnitude of the Fresnel reflection coefficient, which is the ratio of the reflected to incident electric field. For layered and finite media, according to the CIE, reflectivity is distinguished from reflectance by the fact that reflectivity applies to thick reflecting objects; when reflection occurs from thin layers of material, internal reflection effects can cause the reflectance to vary with layer thickness. Reflectivity is the limit value of reflectance as the sample becomes thick. Another way to interpret this is that reflectance is the fraction of electromagnetic power reflected from a specific sample, while reflectivity is a property of the material itself, which would be measured on a perfect machine if the material filled half of all space. Since reflectance is a directional property, most surfaces can be divided into those that give specular reflection and those that give diffuse reflection: for specular surfaces, such as glass or polished metal, reflectance is nearly zero at all angles except at the appropriate reflected angle.
For diffuse surfaces, by contrast, radiation is reflected nearly equally in all directions; such surfaces are said to be Lambertian. Most real objects have some mixture of specular and diffuse reflective properties. Reflection occurs when light moves from a medium with one index of refraction into a second medium with a different index of refraction. Specular reflection from a body of water is calculated by the Fresnel equations.
The Fresnel equations describe the reflection and transmission of light when it is incident on an interface between different optical media. They were deduced by Augustin-Jean Fresnel, the first to understand that light is a transverse wave, though no one at the time realized that the "vibrations" of the wave were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface. When light strikes the interface between a medium with refractive index n1 and a second medium with refractive index n2, both reflection and refraction of the light may occur. The Fresnel equations give the ratios of the reflected and transmitted waves' electric fields to the incident wave's electric field. Since these are complex ratios, they describe not only the relative amplitudes but also the phase shifts between the waves. The equations assume the interface between the media is flat and that the media are homogeneous and isotropic.
The incident light is assumed to be a plane wave, which is sufficient to solve any problem, since any incident light field can be decomposed into plane waves and polarizations. There are two sets of Fresnel coefficients, one for each of the two linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. The s polarization refers to polarization of a wave's electric field normal to the plane of incidence; the p polarization refers to polarization of the electric field in the plane of incidence. Unpolarized light has an equal amount of power in each of the two linear polarizations. Although the reflectance and transmittance depend on polarization, at normal incidence there is no distinction between the two, so all polarization states are governed by a single set of Fresnel coefficients. Consider an incident plane wave travelling in the direction of the ray IO that strikes the interface between two media of refractive indices n1 and n2 at point O.
Part of the wave is reflected in the direction OR and part refracted in the direction OT. The angles that the incident, reflected and refracted rays make to the normal of the interface are given as θi, θr and θt, respectively. The relationship between these angles is given by the law of reflection, θi = θr, and by Snell's law, n1 sin θi = n2 sin θt. The behavior of light striking the interface is solved by considering the electric and magnetic fields that constitute an electromagnetic wave and the laws of electromagnetism. The ratios of the waves' electric field amplitudes are obtained, but in practice one is more interested in formulae for the power coefficients, since power is what can be directly measured at optical frequencies. The power of a wave is proportional to the square of the electric field amplitude. The fraction of the incident power reflected from the interface is called the reflectance R, and the fraction refracted into the second medium is called the transmittance T. Note that these are what would be measured right at each side of the interface and do not account for attenuation of a wave in an absorbing medium following transmission or reflection.
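Snell's law can be applied numerically to find the refraction angle. A short sketch (function name and values are illustrative), which also flags total internal reflection when n1 sin θi / n2 exceeds 1:

```python
import math

def refraction_angle(n1, n2, theta_i):
    """Angle of the refracted ray (radians) from Snell's law:
    n1 * sin(theta_i) = n2 * sin(theta_t).
    Returns None beyond the critical angle (total internal reflection)."""
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:
        return None  # no transmitted ray
    return math.asin(s)

# Air (n = 1.0) to glass (n = 1.5), 30-degree incidence:
# the ray bends toward the normal, to about 19.47 degrees.
theta_t = refraction_angle(1.0, 1.5, math.radians(30))
```

Going the other way, from glass to air at 60 degrees, the function returns None because the incidence angle exceeds the critical angle.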
The reflectance for s-polarized light is Rs = |(Z2 cos θi - Z1 cos θt) / (Z2 cos θi + Z1 cos θt)|², while the reflectance for p-polarized light is Rp = |(Z2 cos θt - Z1 cos θi) / (Z2 cos θt + Z1 cos θi)|², where Z1 and Z2 are the wave impedances of media 1 and 2, respectively.
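These formulas can be checked numerically. A sketch assuming lossless, non-magnetic media, where the wave impedance is Z = Z0/n and the impedance-form expressions reduce to the familiar refractive-index form; the function name and values are illustrative:

```python
import math

def fresnel_reflectances(n1, n2, theta_i):
    """Power reflectances (Rs, Rp) at a planar interface between two
    lossless, non-magnetic media. With Z = Z0/n, the impedance-form
    Fresnel formulas reduce to the refractive-index form used below."""
    sin_t = n1 * math.sin(theta_i) / n2
    if abs(sin_t) > 1.0:
        return 1.0, 1.0  # total internal reflection
    theta_t = math.asin(sin_t)
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    rs = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    rp = (n1 * ct - n2 * ci) / (n1 * ct + n2 * ci)
    return rs**2, rp**2

# Air-to-glass at normal incidence: Rs = Rp = ((1 - 1.5) / (1 + 1.5))^2 = 0.04,
# illustrating that s and p are indistinguishable at normal incidence.
Rs, Rp = fresnel_reflectances(1.0, 1.5, 0.0)
```

At Brewster's angle, atan(n2/n1), the p-polarized reflectance vanishes, which is a useful sanity check on the sign convention.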
In radio-frequency engineering, a transmission line is a specialized cable or other structure designed to conduct alternating current of radio frequency, that is, currents with a frequency high enough that their wave nature must be taken into account. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas, distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections and high-speed computer data buses. This article covers two-conductor transmission lines such as parallel line, coaxial cable and microstrip. Some sources refer to waveguide, dielectric waveguide and optical fibre as transmission lines; however, these lines require different analytical techniques and so are not covered by this article. Ordinary electrical cables suffice to carry low-frequency alternating current, such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they cannot be used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses.
Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks. Transmission lines use specialized construction and impedance matching to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross-sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line, coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength. At microwave frequencies and above, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves.
Some sources define waveguides as a type of transmission line. At higher frequencies still, in the terahertz and visible ranges, waveguides in turn become lossy, and optical methods are used to guide electromagnetic waves. The theory of sound wave propagation is mathematically similar to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves. Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin and Oliver Heaviside. In 1855 Lord Kelvin formulated a diffusion model of the current in a submarine cable; the model predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. In 1885 Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations. In many electric circuits, the length of the wires connecting the components can for the most part be ignored; that is, the voltage on the wire at a given time can be assumed to be the same at all points.
However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire. A common rule of thumb is that the cable or wire should be treated as a transmission line if the length is greater than 1/10 of the wavelength. At this length the phase delay and the interference of any reflections on the line become important and can lead to unpredictable behaviour in systems which have not been designed using transmission line theory. For the purposes of analysis, an electrical transmission line can be modelled as a two-port network, as follows: In the simplest case, the network is assumed to be linear, the two ports are assumed to be interchangeable. If the transmission line is uniform along its length its behaviour is described by a single parameter called the characteristic impedance, symbol Z0.
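The λ/10 rule of thumb is easy to express in code. A sketch (names are illustrative); the velocity factor accounts for the wave travelling more slowly in a cable than in vacuum:

```python
def needs_transmission_line_model(length_m, frequency_hz, velocity_factor=1.0):
    """Rule of thumb from transmission line theory: treat a cable as a
    transmission line when its length exceeds 1/10 of the signal
    wavelength. velocity_factor is the wave speed in the cable
    relative to the speed of light."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    wavelength = velocity_factor * c / frequency_hz
    return length_m > wavelength / 10.0

# A 2 m cable carrying a 100 MHz signal (wavelength about 3 m in vacuum)
# exceeds lambda/10 = 0.3 m, so transmission line effects matter.
result = needs_transmission_line_model(2.0, 100e6)
```

The same 2 m cable at audio or mains frequencies is a tiny fraction of a wavelength and can be treated as an ordinary wire.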
This is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. When sending power down a transmission line, it is desirable that as much power as possible be absorbed by the load and as little as possible be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched. Some of the power fed into a transmission line is lost because of its resistance. This effect is called resistive loss. At high frequencies, another effect called dielectric loss becomes significant, adding to the losses caused by resistance.
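The benefit of matching can be quantified with the standard voltage reflection coefficient Γ = (ZL - Z0)/(ZL + Z0), which is zero for a matched load; the fraction of incident power reflected is |Γ|². A sketch with illustrative values:

```python
def reflection_coefficient(z_load, z0):
    """Voltage reflection coefficient at the load of a transmission line
    with characteristic impedance z0 terminated in impedance z_load."""
    return (z_load - z0) / (z_load + z0)

# A 75-ohm load on a 50-ohm line: gamma = 25/125 = 0.2,
# so |gamma|^2 = 4% of the incident power is reflected.
gamma = reflection_coefficient(75.0, 50.0)
reflected_power_fraction = abs(gamma) ** 2

# A matched load (z_load = z0) reflects nothing.
matched = reflection_coefficient(50.0, 50.0)
```

In general ZL and Z0 are complex, and the same formula applies unchanged.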
The target strength or acoustic size is a measure of the area of a sonar target, quantified as a number of decibels. For fish such as salmon, the target strength varies with the length of the fish; a 5 cm fish could have a target strength of about -50 dB. Target strength is equal to 10 log10(σbs / (1 m²)), where σbs is the backscattering cross section of the target; the corresponding total scattering cross section is 4πσbs.
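The decibel definition can be sketched as follows; the function name and cross-section value are illustrative:

```python
import math

def target_strength_db(sigma_bs):
    """Target strength in dB: TS = 10 * log10(sigma_bs / 1 m^2),
    where sigma_bs is the backscattering cross section in m^2."""
    return 10.0 * math.log10(sigma_bs)

# A backscattering cross section of 1e-5 m^2 corresponds to
# TS = -50 dB, roughly the figure quoted above for a 5 cm fish.
ts = target_strength_db(1e-5)
```

Inverting the formula, σbs = 10^(TS/10) m² recovers the cross section from a measured target strength.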
Gamma is the third letter of the Greek alphabet. In the system of Greek numerals it has a value of 3. In Ancient Greek, the letter gamma represented a voiced velar stop /ɡ/. In Modern Greek, this letter represents either a voiced velar fricative or a voiced palatal fricative. In the International Phonetic Alphabet and other modern Latin-alphabet based phonetic notations, it represents the voiced velar fricative. The Greek letter gamma Γ was derived from the Phoenician letter for the /g/ phoneme, and as such is cognate with the Hebrew gimel ג. Based on its name, the letter has been interpreted as an abstract representation of a camel's neck, but this has been criticized as contrived; it is more likely that the letter is derived from an Egyptian hieroglyph representing a club or throwing stick. In Archaic Greece, the shape of gamma was closer to a classical lambda, while lambda retained the Phoenician L-shape. Letters that arose from the Greek gamma include Etruscan, Roman C and G, Runic kaunan ᚲ, Gothic geuua, the Coptic Ⲅ, and the Cyrillic letters Г and Ґ.
The Ancient Greek /g/ phoneme was the voiced velar stop, continuing the reconstructed Proto-Indo-European *g and *ǵ. The modern Greek phoneme represented by gamma is realized either as a voiced palatal fricative before a front vowel, or as a voiced velar fricative /ɣ/ in all other environments. Both in Ancient and in Modern Greek, before other velar consonants, gamma represents a velar nasal /ŋ/; a double gamma γγ represents the sequence /ŋɡ/ or /ŋɣ/. The gamma was added to the Latin alphabet in the following forms: majuscule Ɣ, minuscule ɣ, and superscript modifier letter ˠ. Lowercase Greek gamma is used in the Americanist phonetic notation and the Uralic Phonetic Alphabet to indicate voiced consonants. In the International Phonetic Alphabet, the minuscule letter represents the voiced velar fricative, and the superscript modifier letter represents velarization. It is not to be confused with the character ɤ, which looks like a lowercase Latin gamma that sits above the baseline rather than crossing it, and which represents the close-mid back unrounded vowel.
In certain nonstandard variations of the IPA, the uppercase form is used. Gamma is a full-fledged majuscule and minuscule letter in the alphabets of some languages of Africa such as Dagbani, Dinka and Ewe, and in Berber languages using the Berber Latin alphabet; it is sometimes used in the romanization of Pashto. The lowercase letter γ is used as a symbol for:
- the chromatic number of a graph in graph theory
- gamma radiation in nuclear physics
- the photon, the elementary particle of light and other electromagnetic radiation
- surface energy in materials science
- the Lorentz factor in the theory of relativity
- the lower incomplete gamma function in mathematics
- the heat capacity ratio Cp/Cv in thermodynamics
- the activity coefficient in thermodynamics
- the gyromagnetic ratio in electromagnetism
- gamma waves in neuroscience
- gamma motor neurons in neuroscience
- a non-SI metric unit of mass equal to one microgram (this always-rare use is deprecated)
- a non-SI unit of magnetic flux density, sometimes used in geophysics, equal to 1 nanotesla
- the power by which the luminance of an image is increased in gamma correction
- the Euler–Mascheroni constant
- specific weight in civil and mechanical engineering
- the shear rate of a fluid, represented by a lowercase gamma with a dot above it: γ̇
- austenite, a metallic non-magnetic allotrope or solid solution of iron
- the gamma carbon, the third carbon attached to a functional group in organic chemistry and biochemistry

The uppercase letter Γ is used as a symbol for:
- the gamma function in mathematics, an extension of the factorial to complex numbers
- the upper incomplete gamma function in mathematics
- the Christoffel symbols in differential geometry
- the gamma distribution in probability theory and statistics, a two-parameter family of continuous probability distributions
- circulation in fluid mechanics
- the reflection coefficient in physics and electrical engineering
- the tape alphabet of a Turing machine
- the Feferman–Schütte ordinal Γ0

The HTML entities for uppercase and lowercase gamma are &Gamma; and &gamma;.
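The gamma function's extension of the factorial, Γ(n) = (n - 1)! for positive integers n, can be verified with the Python standard library:

```python
import math

# The gamma function extends the factorial: Gamma(n) = (n - 1)!
# for every positive integer n.
for n in range(1, 7):
    assert math.gamma(n) == math.factorial(n - 1)

# Unlike the factorial, it is also defined at non-integer arguments,
# e.g. Gamma(1/2) = sqrt(pi).
half = math.gamma(0.5)
```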
Unicode encodes gamma in several blocks: Greek gamma, Coptic gamma, Latin and phonetic gamma, CJK square gamma, and technical/mathematical gamma. The mathematical characters are used only as mathematical symbols; stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style. Related letters include the Cyrillic Ge (Г, г) and the Latin G (G, g).
Electrical engineering is a professional engineering discipline that deals with the study and application of electricity and electromagnetism. The field first became an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electric power distribution and use. Subsequently, broadcasting and recording media made electronics part of daily life. The invention of the transistor and, later, the integrated circuit brought down the cost of electronics to the point that they can be used in almost any household object. Electrical engineering has now divided into a wide range of fields including electronics, digital computers, computer engineering, power engineering, telecommunications, control systems, radio-frequency engineering, signal processing and microelectronics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations such as hardware engineering, power electronics, electromagnetics and waves, microwave engineering, electrochemistry, renewable energies, electrical materials science, and much more.
See the glossary of electrical and electronics engineering. Electrical engineers typically hold a degree in electrical engineering or electronic engineering. Practising engineers may be members of a professional body; such bodies include the Institute of Electrical and Electronics Engineers and the Institution of Engineering and Technology. Electrical engineers work in a wide range of industries, and the skills required are correspondingly variable; these range from basic circuit theory to the management skills required of a project manager. The tools and equipment that an individual engineer may need are likewise variable, ranging from a simple voltmeter to a top-end analyzer to sophisticated design and manufacturing software. Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist and was the first to draw a clear distinction between magnetism and static electricity; he is credited with establishing the term "electricity". He designed the versorium, a device that detects the presence of statically charged objects.
In 1762 the Swedish professor Johan Carl Wilcke invented a device named the electrophorus that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery. In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his treatise Electricity and Magnetism. In 1782 Georges-Louis Le Sage developed and presented in Berlin the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet.
This telegraph connected two rooms and was an electrostatic telegraph. In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804 he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was innovative, though it was influenced by and based upon two new discoveries made in Europe in 1800: Alessandro Volta's electric battery for generating an electric current and William Nicholson and Anthony Carlisle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and USA to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity.
Over 50 years later, he joined the new Society of Telegraph Engineers, where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land-lines, submarine cables and, from about 1890, wireless telegraphy. Practical applications and advances in such fields created an increasing need for standardised units of measure; they led to the international standardization of the units volt, coulomb, ohm and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardisation in various industries, and in many countries the definitions were recognized in relevant legislation. During these years, the study of electricity was considered to be a subfield of physics, since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882.
The first electrical engineering degree program was started at the Massachusetts Institute of Technology in the physics department.
General Services Administration
The General Services Administration, an independent agency of the United States government, was established in 1949 to help manage and support the basic functioning of federal agencies. GSA supplies products and communications for U.S. government offices, provides transportation and office space to federal employees, and develops government-wide cost-minimizing policies, among other management tasks. GSA employs about 12,000 federal workers and has an annual operating budget of $20.9 billion. GSA oversees $66 billion of procurement annually and contributes to the management of about $500 billion in U.S. federal property, divided chiefly among 8,700 owned and leased buildings and a 215,000-vehicle motor pool. Among the real estate assets managed by GSA are the Ronald Reagan Building and International Trade Center in Washington, D.C., the largest U.S. federal building after the Pentagon, and the Hart-Dole-Inouye Federal Center. GSA's business lines include the Federal Acquisition Service and the Public Buildings Service, as well as several staff offices including the Office of Government-wide Policy, the Office of Small Business Utilization and the Office of Mission Assurance.
As part of FAS, GSA's Technology Transformation Services helps federal agencies improve delivery of information and services to the public. Key initiatives include FedRAMP, Cloud.gov, the USAGov platform, Data.gov, Performance.gov and Challenge.gov. GSA is a member of the Procurement G6, an informal group leading the use of framework agreements and e-procurement instruments in public procurement. In 1947 President Harry Truman asked former President Herbert Hoover to lead what became known as the Hoover Commission to make recommendations to reorganize the operations of the federal government. One of the recommendations of the commission was the establishment of an "Office of the General Services." This proposed office would combine the responsibilities of the following organizations:
- the U.S. Treasury Department's Bureau of Federal Supply
- the U.S. Treasury Department's Office of Contract Settlement
- the National Archives Establishment
- all functions of the Federal Works Agency, including the Public Buildings Administration and the Public Roads Administration
- the War Assets Administration

GSA became an independent agency on July 1, 1949, after the passage of the Federal Property and Administrative Services Act.
General Jess Larson, Administrator of the War Assets Administration, was named GSA's first Administrator. The first job awaiting Administrator Larson and the newly formed GSA was a complete renovation of the White House. The structure had fallen into such a state of disrepair by 1949 that one inspector of the time said the historic structure was standing "purely from habit." Larson explained the nature of the total renovation in depth by saying, "In order to make the White House structurally sound, it was necessary to dismantle, I mean dismantle, everything from the White House except the four walls, which were constructed of stone. Everything, except the four walls without a roof, was stripped down, and that's where the work started." GSA worked with President Truman and First Lady Bess Truman to ensure that the new agency's first major project would be a success. GSA completed the renovation in 1952. In 1986 the GSA headquarters, the U.S. General Services Administration Building, located at Eighteenth and F Streets NW, was listed on the National Register of Historic Places, at the time serving as Interior Department offices.
In 1960 GSA created the Federal Telecommunications System, a government-wide intercity telephone system. In 1962 the Ad Hoc Committee on Federal Office Space created a new building program to address obsolete office buildings in Washington, D.C., resulting in the construction of many of the offices that now line Independence Avenue. In 1970 the Nixon administration created the Consumer Product Information Coordinating Center, now part of USAGov. In 1972 GSA established the Automated Data and Telecommunications Service, which later became the Office of Information Resources Management. In 1973 GSA created the Office of Federal Management Policy, and in 1974 the Federal Buildings Fund was initiated, allowing GSA to issue rent bills to federal agencies. GSA's Office of Acquisition Policy centralized procurement policy in 1978. GSA was responsible for emergency preparedness and for stockpiling strategic materials to be used in wartime until these functions were transferred to the newly created Federal Emergency Management Agency in 1979.
In 1984 GSA introduced the federal government to the use of charge cards, known as the GSA SmartPay system. The National Archives and Records Administration was spun off into an independent agency in 1985; the same year, GSA began to provide government-wide policy oversight and guidance for federal real property management as a result of an executive order signed by President Ronald Reagan. In 2003 the Federal Protective Service was moved to the Department of Homeland Security. In 2005 GSA reorganized, merging the Federal Supply Service and Federal Technology Service business lines into the Federal Acquisition Service. On April 3, 2009, President Barack Obama nominated Martha N. Johnson to serve as GSA Administrator. After a nine-month delay, the United States Senate confirmed her nomination on February 4, 2010. On April 2, 2012, Johnson resigned in the wake of a management-deficiency report that detailed improper payments for a 2010 "Western Regions" training conference put on by the Public Buildings Service in Las Vegas.
In July 1991 GSA contractors began the excavation of what is now the Ted Weiss Federal Building in New York City. The planning for that buildin